00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2349
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3610
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.088 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.088 The recommended git tool is: git
00:00:00.088 using credential 00000000-0000-0000-0000-000000000002
00:00:00.090 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.172 Fetching changes from the remote Git repository
00:00:00.175 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.248 Using shallow fetch with depth 1
00:00:00.248 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.249 > git --version # timeout=10
00:00:00.310 > git --version # 'git version 2.39.2'
00:00:00.310 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.350 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.350 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.006 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.017 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.030 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:08.030 > git config core.sparsecheckout # timeout=10
00:00:08.041 > git read-tree -mu HEAD # timeout=10
00:00:08.058 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:08.077 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:08.077 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:08.175 [Pipeline] Start of Pipeline
00:00:08.188 [Pipeline] library
00:00:08.189 Loading library shm_lib@master
00:00:08.189 Library shm_lib@master is cached. Copying from home.
00:00:08.201 [Pipeline] node
00:00:08.209 Running on VM-host-SM0 in /var/jenkins/workspace/ubuntu22-vg-autotest_2
00:00:08.210 [Pipeline] {
00:00:08.219 [Pipeline] catchError
00:00:08.220 [Pipeline] {
00:00:08.233 [Pipeline] wrap
00:00:08.241 [Pipeline] {
00:00:08.249 [Pipeline] stage
00:00:08.250 [Pipeline] { (Prologue)
00:00:08.268 [Pipeline] echo
00:00:08.269 Node: VM-host-SM0
00:00:08.277 [Pipeline] cleanWs
00:00:08.287 [WS-CLEANUP] Deleting project workspace...
00:00:08.287 [WS-CLEANUP] Deferred wipeout is used...
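For reference, the pinned shallow checkout above is reproducible outside Jenkins. A minimal sketch, assuming the Gerrit mirror and commit are still reachable (the jbp directory name is arbitrary):

  git init jbp && cd jbp
  # --depth=1 matches the "Using shallow fetch with depth 1" step: only one revision is downloaded
  git fetch --tags --force --progress --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  # detach onto the exact revision recorded above
  git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf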
00:00:08.292 [WS-CLEANUP] done
00:00:08.500 [Pipeline] setCustomBuildProperty
00:00:08.588 [Pipeline] httpRequest
00:00:09.222 [Pipeline] echo
00:00:09.223 Sorcerer 10.211.164.101 is alive
00:00:09.231 [Pipeline] retry
00:00:09.232 [Pipeline] {
00:00:09.243 [Pipeline] httpRequest
00:00:09.246 HttpMethod: GET
00:00:09.247 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:09.247 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:09.253 Response Code: HTTP/1.1 200 OK
00:00:09.254 Success: Status code 200 is in the accepted range: 200,404
00:00:09.254 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest_2/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:17.839 [Pipeline] }
00:00:17.855 [Pipeline] // retry
00:00:17.863 [Pipeline] sh
00:00:18.143 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:18.158 [Pipeline] httpRequest
00:00:18.610 [Pipeline] echo
00:00:18.612 Sorcerer 10.211.164.101 is alive
00:00:18.622 [Pipeline] retry
00:00:18.624 [Pipeline] {
00:00:18.640 [Pipeline] httpRequest
00:00:18.645 HttpMethod: GET
00:00:18.645 URL: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:18.646 Sending request to url: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:18.654 Response Code: HTTP/1.1 200 OK
00:00:18.655 Success: Status code 200 is in the accepted range: 200,404
00:00:18.655 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:13.601 [Pipeline] }
00:01:13.619 [Pipeline] // retry
00:01:13.627 [Pipeline] sh
00:01:13.909 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:17.230 [Pipeline] sh
00:01:17.510 + git -C spdk log --oneline -n5
00:01:17.510 c13c99a5e test: Various fixes for Fedora40
00:01:17.510 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:01:17.510 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:01:17.510 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:01:17.510 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:01:17.529 [Pipeline] writeFile
00:01:17.544 [Pipeline] sh
00:01:17.859 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:17.871 [Pipeline] sh
00:01:18.150 + cat autorun-spdk.conf
00:01:18.150 SPDK_TEST_UNITTEST=1
00:01:18.150 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:18.150 SPDK_TEST_NVME=1
00:01:18.150 SPDK_TEST_BLOCKDEV=1
00:01:18.150 SPDK_RUN_ASAN=1
00:01:18.150 SPDK_RUN_UBSAN=1
00:01:18.150 SPDK_TEST_RAID5=1
00:01:18.150 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:18.157 RUN_NIGHTLY=1
00:01:18.159 [Pipeline] }
00:01:18.173 [Pipeline] // stage
00:01:18.190 [Pipeline] stage
00:01:18.192 [Pipeline] { (Run VM)
00:01:18.205 [Pipeline] sh
00:01:18.486 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:18.486 + echo 'Start stage prepare_nvme.sh'
00:01:18.486 Start stage prepare_nvme.sh
00:01:18.486 + [[ -n 2 ]]
00:01:18.486 + disk_prefix=ex2
00:01:18.486 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest_2 ]]
00:01:18.486 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest_2/autorun-spdk.conf ]]
00:01:18.486 + source /var/jenkins/workspace/ubuntu22-vg-autotest_2/autorun-spdk.conf
00:01:18.486 ++ SPDK_TEST_UNITTEST=1
00:01:18.486 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:18.486 ++ SPDK_TEST_NVME=1
00:01:18.486 ++ SPDK_TEST_BLOCKDEV=1
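(A note on the trace format around here: prepare_nvme.sh runs under bash xtrace, so each executed command is echoed with a "+" prefix, and the doubled "++" on the surrounding lines marks one extra nesting level, i.e. the assignments executed inside the sourced autorun-spdk.conf. A minimal sketch reproducing the effect; demo.conf is a hypothetical throwaway file, not part of this job:)

  printf 'SPDK_TEST_NVME=1\n' > demo.conf
  set -x             # enable xtrace; the first character of PS4 repeats once per nesting level
  source demo.conf   # echoed as: ++ SPDK_TEST_NVME=1
  set +x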
00:01:18.486 ++ SPDK_RUN_ASAN=1
00:01:18.486 ++ SPDK_RUN_UBSAN=1
00:01:18.486 ++ SPDK_TEST_RAID5=1
00:01:18.486 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:18.486 ++ RUN_NIGHTLY=1
00:01:18.486 + cd /var/jenkins/workspace/ubuntu22-vg-autotest_2
00:01:18.486 + nvme_files=()
00:01:18.486 + declare -A nvme_files
00:01:18.486 + backend_dir=/var/lib/libvirt/images/backends
00:01:18.486 + nvme_files['nvme.img']=5G
00:01:18.486 + nvme_files['nvme-cmb.img']=5G
00:01:18.486 + nvme_files['nvme-multi0.img']=4G
00:01:18.486 + nvme_files['nvme-multi1.img']=4G
00:01:18.486 + nvme_files['nvme-multi2.img']=4G
00:01:18.486 + nvme_files['nvme-openstack.img']=8G
00:01:18.486 + nvme_files['nvme-zns.img']=5G
00:01:18.486 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:18.486 + (( SPDK_TEST_FTL == 1 ))
00:01:18.486 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:18.486 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:18.486 + for nvme in "${!nvme_files[@]}"
00:01:18.486 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:01:18.486 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:18.486 + for nvme in "${!nvme_files[@]}"
00:01:18.486 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:01:18.486 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:18.486 + for nvme in "${!nvme_files[@]}"
00:01:18.486 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:01:18.486 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:18.486 + for nvme in "${!nvme_files[@]}"
00:01:18.486 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:01:18.486 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:18.486 + for nvme in "${!nvme_files[@]}"
00:01:18.486 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:01:18.486 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:18.486 + for nvme in "${!nvme_files[@]}"
00:01:18.486 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:01:18.486 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:18.486 + for nvme in "${!nvme_files[@]}"
00:01:18.486 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:01:18.745 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:18.745 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:01:18.745 + echo 'End stage prepare_nvme.sh'
00:01:18.745 End stage prepare_nvme.sh
00:01:18.754 [Pipeline] sh
00:01:19.043 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:19.043 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -H -a -v -f ubuntu2204
00:01:19.043
00:01:19.043 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk/scripts/vagrant
00:01:19.043 SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk
00:01:19.043 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest_2
00:01:19.043 HELP=0
00:01:19.043 DRY_RUN=0
00:01:19.043 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,
00:01:19.043 NVME_DISKS_TYPE=nvme,
00:01:19.043 NVME_AUTO_CREATE=0
00:01:19.043 NVME_DISKS_NAMESPACES=,
00:01:19.043 NVME_CMB=,
00:01:19.043 NVME_PMR=,
00:01:19.043 NVME_ZNS=,
00:01:19.043 NVME_MS=,
00:01:19.043 NVME_FDP=,
00:01:19.043 SPDK_VAGRANT_DISTRO=ubuntu2204
00:01:19.043 SPDK_VAGRANT_VMCPU=10
00:01:19.043 SPDK_VAGRANT_VMRAM=12288
00:01:19.043 SPDK_VAGRANT_PROVIDER=libvirt
00:01:19.043 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:19.043 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:19.043 SPDK_OPENSTACK_NETWORK=0
00:01:19.043 VAGRANT_PACKAGE_BOX=0
00:01:19.043 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:01:19.043 FORCE_DISTRO=true
00:01:19.043 VAGRANT_BOX_VERSION=
00:01:19.043 EXTRA_VAGRANTFILES=
00:01:19.043 NIC_MODEL=e1000
00:01:19.043
00:01:19.043 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt'
00:01:19.043 /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest_2
00:01:22.334 Bringing machine 'default' up with 'libvirt' provider...
00:01:22.901 ==> default: Creating image (snapshot of base box volume).
00:01:22.902 ==> default: Creating domain with the following settings...
00:01:22.902 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1730824811_af1beebb522597d54d22
00:01:22.902 ==> default: -- Domain type: kvm
00:01:22.902 ==> default: -- Cpus: 10
00:01:22.902 ==> default: -- Feature: acpi
00:01:22.902 ==> default: -- Feature: apic
00:01:22.902 ==> default: -- Feature: pae
00:01:22.902 ==> default: -- Memory: 12288M
00:01:22.902 ==> default: -- Memory Backing: hugepages:
00:01:22.902 ==> default: -- Management MAC:
00:01:22.902 ==> default: -- Loader:
00:01:22.902 ==> default: -- Nvram:
00:01:22.902 ==> default: -- Base box: spdk/ubuntu2204
00:01:22.902 ==> default: -- Storage pool: default
00:01:22.902 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1730824811_af1beebb522597d54d22.img (20G)
00:01:22.902 ==> default: -- Volume Cache: default
00:01:22.902 ==> default: -- Kernel:
00:01:22.902 ==> default: -- Initrd:
00:01:22.902 ==> default: -- Graphics Type: vnc
00:01:22.902 ==> default: -- Graphics Port: -1
00:01:22.902 ==> default: -- Graphics IP: 127.0.0.1
00:01:22.902 ==> default: -- Graphics Password: Not defined
00:01:22.902 ==> default: -- Video Type: cirrus
00:01:22.902 ==> default: -- Video VRAM: 9216
00:01:22.902 ==> default: -- Sound Type:
00:01:22.902 ==> default: -- Keymap: en-us
00:01:22.902 ==> default: -- TPM Path:
00:01:22.902 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:22.902 ==> default: -- Command line args:
00:01:22.902 ==> default: -> value=-device,
00:01:22.902 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:01:22.902 ==> default: -> value=-drive,
00:01:22.902 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:01:22.902 ==> default: -> value=-device,
00:01:22.902 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:23.160 ==> default: Creating shared folders metadata...
00:01:23.160 ==> default: Starting domain.
00:01:25.063 ==> default: Waiting for domain to get an IP address...
00:01:37.294 ==> default: Waiting for SSH to become available...
00:01:37.294 ==> default: Configuring and enabling network interfaces...
00:01:41.481 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:46.745 ==> default: Mounting SSHFS shared folder...
00:01:48.153 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output
00:01:48.153 ==> default: Checking Mount..
00:01:48.718 ==> default: Folder Successfully Mounted!
00:01:48.718 ==> default: Running provisioner: file...
00:01:48.976 default: ~/.gitconfig => .gitconfig
00:01:49.235
00:01:49.235 SUCCESS!
00:01:49.235
00:01:49.235 cd to /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt and type "vagrant ssh" to use.
00:01:49.235 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:49.235 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt" to destroy all trace of vm.
00:01:49.235
00:01:49.243 [Pipeline] }
00:01:49.257 [Pipeline] // stage
00:01:49.266 [Pipeline] dir
00:01:49.267 Running in /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt
00:01:49.268 [Pipeline] {
00:01:49.281 [Pipeline] catchError
00:01:49.284 [Pipeline] {
00:01:49.298 [Pipeline] sh
00:01:49.577 + vagrant ssh-config --host vagrant
00:01:49.577 + sed -ne /^Host/,$p
00:01:49.577 + tee ssh_conf
00:01:53.773 Host vagrant
00:01:53.773 HostName 192.168.121.136
00:01:53.773 User vagrant
00:01:53.773 Port 22
00:01:53.773 UserKnownHostsFile /dev/null
00:01:53.773 StrictHostKeyChecking no
00:01:53.773 PasswordAuthentication no
00:01:53.773 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204
00:01:53.773 IdentitiesOnly yes
00:01:53.773 LogLevel FATAL
00:01:53.773 ForwardAgent yes
00:01:53.773 ForwardX11 yes
00:01:53.773
00:01:53.786 [Pipeline] withEnv
00:01:53.788 [Pipeline] {
00:01:53.801 [Pipeline] sh
00:01:54.081 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:54.082 source /etc/os-release
00:01:54.082 [[ -e /image.version ]] && img=$(< /image.version)
00:01:54.082 # Minimal, systemd-like check.
00:01:54.082 if [[ -e /.dockerenv ]]; then
00:01:54.082 # Clear garbage from the node's name:
00:01:54.082 # agt-er_autotest_547-896 -> autotest_547-896
00:01:54.082 # $HOSTNAME is the actual container id
00:01:54.082 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:54.082 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:54.082 # We can assume this is a mount from a host where container is running,
00:01:54.082 # so fetch its hostname to easily identify the target swarm worker.
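# (Annotation, not part of the CI script: Docker mounts a per-container
# hostname file over /etc/hostname, so finding /etc/hostname listed in
# /proc/self/mountinfo is a cheap signal that the file was provided by the
# host running the container. A stand-alone sketch of the same probe:
#   if grep -q /etc/hostname /proc/self/mountinfo; then
#       echo "host-provided hostname: $(< /etc/hostname)"
#   fi
# )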
00:01:54.082 container="$(< /etc/hostname) ($agent)"
00:01:54.082 else
00:01:54.082 # Fallback
00:01:54.082 container=$agent
00:01:54.082 fi
00:01:54.082 fi
00:01:54.082 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:54.082
00:01:54.350 [Pipeline] }
00:01:54.366 [Pipeline] // withEnv
00:01:54.373 [Pipeline] setCustomBuildProperty
00:01:54.386 [Pipeline] stage
00:01:54.388 [Pipeline] { (Tests)
00:01:54.404 [Pipeline] sh
00:01:54.681 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:55.013 [Pipeline] sh
00:01:55.292 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:55.564 [Pipeline] timeout
00:01:55.564 Timeout set to expire in 1 hr 30 min
00:01:55.566 [Pipeline] {
00:01:55.579 [Pipeline] sh
00:01:55.857 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:56.423 HEAD is now at c13c99a5e test: Various fixes for Fedora40
00:01:56.434 [Pipeline] sh
00:01:56.712 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:56.983 [Pipeline] sh
00:01:57.277 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:57.552 [Pipeline] sh
00:01:57.833 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu22-vg-autotest ./autoruner.sh spdk_repo
00:01:58.091 ++ readlink -f spdk_repo
00:01:58.091 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:58.091 + [[ -n /home/vagrant/spdk_repo ]]
00:01:58.091 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:58.091 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:58.091 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:58.091 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:58.091 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:58.091 + [[ ubuntu22-vg-autotest == pkgdep-* ]]
00:01:58.091 + cd /home/vagrant/spdk_repo
00:01:58.091 + source /etc/os-release
00:01:58.091 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS'
00:01:58.091 ++ NAME=Ubuntu
00:01:58.091 ++ VERSION_ID=22.04
00:01:58.091 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)'
00:01:58.091 ++ VERSION_CODENAME=jammy
00:01:58.091 ++ ID=ubuntu
00:01:58.091 ++ ID_LIKE=debian
00:01:58.091 ++ HOME_URL=https://www.ubuntu.com/
00:01:58.091 ++ SUPPORT_URL=https://help.ubuntu.com/
00:01:58.091 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
00:01:58.091 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
00:01:58.091 ++ UBUNTU_CODENAME=jammy
00:01:58.091 + uname -a
00:01:58.091 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
00:01:58.091 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:58.091 Hugepages
00:01:58.091 node hugesize free / total
00:01:58.091 node0 1048576kB 0 / 0
00:01:58.091 node0 2048kB 0 / 0
00:01:58.091
00:01:58.091 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:58.091 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:58.349 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:58.349 + rm -f /tmp/spdk-ld-path
00:01:58.349 + source autorun-spdk.conf
00:01:58.349 ++ SPDK_TEST_UNITTEST=1
00:01:58.349 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:58.349 ++ SPDK_TEST_NVME=1
00:01:58.349 ++ SPDK_TEST_BLOCKDEV=1
00:01:58.349 ++ SPDK_RUN_ASAN=1
00:01:58.349 ++ SPDK_RUN_UBSAN=1
00:01:58.349 ++ SPDK_TEST_RAID5=1
00:01:58.349 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:58.349 ++ RUN_NIGHTLY=1
00:01:58.349 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:58.349 + [[ -n '' ]]
00:01:58.349 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:58.349 + for M in /var/spdk/build-*-manifest.txt
00:01:58.350 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:58.350 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:58.350 + for M in /var/spdk/build-*-manifest.txt
00:01:58.350 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:58.350 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:58.350 ++ uname
00:01:58.350 + [[ Linux == \L\i\n\u\x ]]
00:01:58.350 + sudo dmesg -T
00:01:58.350 + sudo dmesg --clear
00:01:58.350 + dmesg_pid=2132
00:01:58.350 + [[ Ubuntu == FreeBSD ]]
00:01:58.350 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:58.350 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:58.350 + sudo dmesg -Tw
00:01:58.350 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:58.350 + [[ -x /usr/src/fio-static/fio ]]
00:01:58.350 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:58.350 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:58.350 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:58.350 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:01:58.350 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:58.350 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:58.350 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:58.350 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:58.350 Test configuration:
00:01:58.350 SPDK_TEST_UNITTEST=1
00:01:58.350 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:58.350 SPDK_TEST_NVME=1
00:01:58.350 SPDK_TEST_BLOCKDEV=1
00:01:58.350 SPDK_RUN_ASAN=1
00:01:58.350 SPDK_RUN_UBSAN=1
00:01:58.350 SPDK_TEST_RAID5=1
00:01:58.350 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:58.350 RUN_NIGHTLY=1
00:01:58.350 16:40:46 -- common/autotest_common.sh@1689 -- $ [[ n == y ]]
00:01:58.350 16:40:46 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:58.350 16:40:46 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:58.350 16:40:46 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:58.350 16:40:46 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:58.350 16:40:46 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:58.350 16:40:46 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:58.350 16:40:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:58.350 16:40:46 -- paths/export.sh@5 -- $ export PATH
00:01:58.350 16:40:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:58.350 16:40:46 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:58.350 16:40:46 -- common/autobuild_common.sh@440 -- $ date +%s
00:01:58.350 16:40:46 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1730824846.XXXXXX
00:01:58.350 16:40:46 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1730824846.L2uA7v
00:01:58.350 16:40:46 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:01:58.350 16:40:46 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:01:58.350 16:40:46 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:58.350 16:40:46 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
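The exclude fragments assembled above are spliced into the scan-build command on the next line. A minimal sketch of invoking clang's scan-build the same way by hand (the output directory and the wrapped build command are placeholders):

  scan-build -o /tmp/scan-build-out \
      --exclude /home/vagrant/spdk_repo/spdk/dpdk/ \
      --exclude /home/vagrant/spdk_repo/spdk/xnvme \
      --exclude /tmp \
      --status-bugs make -j10   # --status-bugs: exit non-zero if any bug is reported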
00:01:58.350 16:40:46 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:58.350 16:40:46 -- common/autobuild_common.sh@456 -- $ get_config_params
00:01:58.350 16:40:46 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:01:58.350 16:40:46 -- common/autotest_common.sh@10 -- $ set +x
00:01:58.350 16:40:46 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f'
00:01:58.350 16:40:46 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:58.350 16:40:46 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:58.350 16:40:46 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:58.350 16:40:46 -- spdk/autobuild.sh@16 -- $ date -u
00:01:58.350 Tue Nov 5 16:40:46 UTC 2024
00:01:58.350 16:40:46 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:58.350 LTS-67-gc13c99a5e
00:01:58.350 16:40:46 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:58.350 16:40:46 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:58.350 16:40:46 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:58.350 16:40:46 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:58.350 16:40:46 -- common/autotest_common.sh@10 -- $ set +x
00:01:58.608 ************************************
00:01:58.608 START TEST asan
00:01:58.608 ************************************
00:01:58.608 using asan
00:01:58.608 16:40:46 -- common/autotest_common.sh@1114 -- $ echo 'using asan'
00:01:58.608
00:01:58.608 real 0m0.001s
00:01:58.608 user 0m0.000s
00:01:58.608 sys 0m0.000s
00:01:58.608 16:40:46 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:01:58.608 16:40:46 -- common/autotest_common.sh@10 -- $ set +x
00:01:58.608 ************************************
00:01:58.608 END TEST asan
00:01:58.608 ************************************
00:01:58.608 16:40:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:58.608 16:40:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:58.608 16:40:46 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:58.608 16:40:46 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:58.608 16:40:46 -- common/autotest_common.sh@10 -- $ set +x
00:01:58.608 ************************************
00:01:58.608 START TEST ubsan
00:01:58.608 ************************************
00:01:58.608 using ubsan
00:01:58.608 16:40:46 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan'
00:01:58.608
00:01:58.608 real 0m0.000s
00:01:58.608 user 0m0.000s
00:01:58.608 sys 0m0.000s
00:01:58.608 16:40:46 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:01:58.608 16:40:46 -- common/autotest_common.sh@10 -- $ set +x
00:01:58.608 ************************************
00:01:58.608 END TEST ubsan
00:01:58.608 ************************************
00:01:58.608 16:40:47 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:58.608 16:40:47 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:58.608 16:40:47 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:58.608 16:40:47 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:58.608 16:40:47 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:58.608 16:40:47 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:01:58.608 16:40:47 -- spdk/autobuild.sh@58 -- $ unittest_build
00:01:58.608 16:40:47 -- common/autobuild_common.sh@416 -- $ run_test unittest_build _unittest_build
00:01:58.608 16:40:47 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']'
00:01:58.608 16:40:47 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:58.608 16:40:47 -- common/autotest_common.sh@10 -- $ set +x
00:01:58.608 ************************************
00:01:58.608 START TEST unittest_build
00:01:58.608 ************************************
00:01:58.608 16:40:47 -- common/autotest_common.sh@1114 -- $ _unittest_build
00:01:58.608 16:40:47 -- common/autobuild_common.sh@407 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared
00:01:58.608 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:58.608 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:59.175 Using 'verbs' RDMA provider
00:02:14.612 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:02:26.860 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:26.860 Creating mk/config.mk...done.
00:02:26.860 Creating mk/cc.flags.mk...done.
00:02:26.860 Type 'make' to build.
00:02:26.860 16:41:14 -- common/autobuild_common.sh@408 -- $ make -j10
00:02:26.860 make[1]: Nothing to be done for 'all'.
00:02:41.749 The Meson build system
00:02:41.749 Version: 1.4.0
00:02:41.749 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:41.749 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:41.749 Build type: native build
00:02:41.749 Program cat found: YES (/usr/bin/cat)
00:02:41.749 Project name: DPDK
00:02:41.749 Project version: 23.11.0
00:02:41.749 C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0")
00:02:41.749 C linker for the host machine: cc ld.bfd 2.38
00:02:41.749 Host machine cpu family: x86_64
00:02:41.749 Host machine cpu: x86_64
00:02:41.749 Message: ## Building in Developer Mode ##
00:02:41.749 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:41.749 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:41.749 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:41.749 Program python3 found: YES (/usr/bin/python3)
00:02:41.749 Program cat found: YES (/usr/bin/cat)
00:02:41.749 Compiler for C supports arguments -march=native: YES
00:02:41.749 Checking for size of "void *" : 8
00:02:41.749 Checking for size of "void *" : 8 (cached)
00:02:41.749 Library m found: YES
00:02:41.749 Library numa found: YES
00:02:41.749 Has header "numaif.h" : YES
00:02:41.749 Library fdt found: NO
00:02:41.749 Library execinfo found: NO
00:02:41.749 Has header "execinfo.h" : YES
00:02:41.749 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2
00:02:41.749 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:41.749 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:41.749 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:41.749 Run-time dependency openssl found: YES 3.0.2
00:02:41.749 Run-time dependency libpcap found: NO (tried pkgconfig)
00:02:41.749 Library pcap found: NO
00:02:41.749 Compiler for C supports arguments -Wcast-qual: YES
00:02:41.749 Compiler for C supports arguments -Wdeprecated: YES
00:02:41.749 Compiler for C supports arguments -Wformat: YES
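The DPDK sub-build whose compiler probes continue below was kicked off by the configure call recorded a few lines up; it can be replayed by hand from an SPDK checkout with the same flags, e.g.:

  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan \
      --enable-asan --enable-coverage --with-raid5f --without-shared
  make -j10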
00:02:41.749 Compiler for C supports arguments -Wformat-nonliteral: YES
00:02:41.749 Compiler for C supports arguments -Wformat-security: YES
00:02:41.749 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:41.749 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:41.749 Compiler for C supports arguments -Wnested-externs: YES
00:02:41.749 Compiler for C supports arguments -Wold-style-definition: YES
00:02:41.749 Compiler for C supports arguments -Wpointer-arith: YES
00:02:41.749 Compiler for C supports arguments -Wsign-compare: YES
00:02:41.749 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:41.749 Compiler for C supports arguments -Wundef: YES
00:02:41.749 Compiler for C supports arguments -Wwrite-strings: YES
00:02:41.749 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:41.749 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:41.749 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:41.749 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:41.749 Program objdump found: YES (/usr/bin/objdump)
00:02:41.749 Compiler for C supports arguments -mavx512f: YES
00:02:41.749 Checking if "AVX512 checking" compiles: YES
00:02:41.749 Fetching value of define "__SSE4_2__" : 1
00:02:41.749 Fetching value of define "__AES__" : 1
00:02:41.749 Fetching value of define "__AVX__" : 1
00:02:41.749 Fetching value of define "__AVX2__" : 1
00:02:41.749 Fetching value of define "__AVX512BW__" : (undefined)
00:02:41.749 Fetching value of define "__AVX512CD__" : (undefined)
00:02:41.749 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:41.749 Fetching value of define "__AVX512F__" : (undefined)
00:02:41.749 Fetching value of define "__AVX512VL__" : (undefined)
00:02:41.749 Fetching value of define "__PCLMUL__" : 1
00:02:41.749 Fetching value of define "__RDRND__" : 1
00:02:41.749 Fetching value of define "__RDSEED__" : 1
00:02:41.749 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:41.749 Fetching value of define "__znver1__" : (undefined)
00:02:41.749 Fetching value of define "__znver2__" : (undefined)
00:02:41.749 Fetching value of define "__znver3__" : (undefined)
00:02:41.749 Fetching value of define "__znver4__" : (undefined)
00:02:41.749 Library asan found: YES
00:02:41.749 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:41.749 Message: lib/log: Defining dependency "log"
00:02:41.749 Message: lib/kvargs: Defining dependency "kvargs"
00:02:41.749 Message: lib/telemetry: Defining dependency "telemetry"
00:02:41.749 Library rt found: YES
00:02:41.749 Checking for function "getentropy" : NO
00:02:41.749 Message: lib/eal: Defining dependency "eal"
00:02:41.749 Message: lib/ring: Defining dependency "ring"
00:02:41.749 Message: lib/rcu: Defining dependency "rcu"
00:02:41.749 Message: lib/mempool: Defining dependency "mempool"
00:02:41.749 Message: lib/mbuf: Defining dependency "mbuf"
00:02:41.749 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:41.749 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:41.749 Compiler for C supports arguments -mpclmul: YES
00:02:41.749 Compiler for C supports arguments -maes: YES
00:02:41.749 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:41.749 Compiler for C supports arguments -mavx512bw: YES
00:02:41.749 Compiler for C supports arguments -mavx512dq: YES
00:02:41.749 Compiler for C supports arguments -mavx512vl: YES
00:02:41.749 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:41.749 Compiler for C supports arguments -mavx2: YES
00:02:41.749 Compiler for C supports arguments -mavx: YES
00:02:41.749 Message: lib/net: Defining dependency "net"
00:02:41.749 Message: lib/meter: Defining dependency "meter"
00:02:41.749 Message: lib/ethdev: Defining dependency "ethdev"
00:02:41.749 Message: lib/pci: Defining dependency "pci"
00:02:41.749 Message: lib/cmdline: Defining dependency "cmdline"
00:02:41.749 Message: lib/hash: Defining dependency "hash"
00:02:41.749 Message: lib/timer: Defining dependency "timer"
00:02:41.749 Message: lib/compressdev: Defining dependency "compressdev"
00:02:41.749 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:41.749 Message: lib/dmadev: Defining dependency "dmadev"
00:02:41.749 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:41.749 Message: lib/power: Defining dependency "power"
00:02:41.749 Message: lib/reorder: Defining dependency "reorder"
00:02:41.749 Message: lib/security: Defining dependency "security"
00:02:41.749 Has header "linux/userfaultfd.h" : YES
00:02:41.749 Has header "linux/vduse.h" : YES
00:02:41.749 Message: lib/vhost: Defining dependency "vhost"
00:02:41.749 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:41.749 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:41.749 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:41.749 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:41.749 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:41.749 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:41.749 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:41.749 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:41.749 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:41.749 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:41.749 Program doxygen found: YES (/usr/bin/doxygen)
00:02:41.749 Configuring doxy-api-html.conf using configuration
00:02:41.749 Configuring doxy-api-man.conf using configuration
00:02:41.749 Program mandb found: YES (/usr/bin/mandb)
00:02:41.749 Program sphinx-build found: NO
00:02:41.749 Configuring rte_build_config.h using configuration
00:02:41.749 Message:
00:02:41.749 =================
00:02:41.749 Applications Enabled
00:02:41.749 =================
00:02:41.749
00:02:41.750 apps:
00:02:41.750
00:02:41.750
00:02:41.750 Message:
00:02:41.750 =================
00:02:41.750 Libraries Enabled
00:02:41.750 =================
00:02:41.750
00:02:41.750 libs:
00:02:41.750 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:41.750 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:41.750 cryptodev, dmadev, power, reorder, security, vhost,
00:02:41.750
00:02:41.750 Message:
00:02:41.750 ===============
00:02:41.750 Drivers Enabled
00:02:41.750 ===============
00:02:41.750
00:02:41.750 common:
00:02:41.750
00:02:41.750 bus:
00:02:41.750 pci, vdev,
00:02:41.750 mempool:
00:02:41.750 ring,
00:02:41.750 dma:
00:02:41.750
00:02:41.750 net:
00:02:41.750
00:02:41.750 crypto:
00:02:41.750
00:02:41.750 compress:
00:02:41.750
00:02:41.750 vdpa:
00:02:41.750
00:02:41.750
00:02:41.750 Message:
00:02:41.750 =================
00:02:41.750 Content Skipped
00:02:41.750 =================
00:02:41.750
00:02:41.750 apps:
00:02:41.750 dumpcap: explicitly disabled via build config
00:02:41.750 graph: explicitly disabled via build config
00:02:41.750 pdump: explicitly disabled via build config
00:02:41.750 proc-info: explicitly disabled via build config
00:02:41.750 test-acl: explicitly disabled via build config
00:02:41.750 test-bbdev: explicitly disabled via build config
00:02:41.750 test-cmdline: explicitly disabled via build config
00:02:41.750 test-compress-perf: explicitly disabled via build config
00:02:41.750 test-crypto-perf: explicitly disabled via build config
00:02:41.750 test-dma-perf: explicitly disabled via build config
00:02:41.750 test-eventdev: explicitly disabled via build config
00:02:41.750 test-fib: explicitly disabled via build config
00:02:41.750 test-flow-perf: explicitly disabled via build config
00:02:41.750 test-gpudev: explicitly disabled via build config
00:02:41.750 test-mldev: explicitly disabled via build config
00:02:41.750 test-pipeline: explicitly disabled via build config
00:02:41.750 test-pmd: explicitly disabled via build config
00:02:41.750 test-regex: explicitly disabled via build config
00:02:41.750 test-sad: explicitly disabled via build config
00:02:41.750 test-security-perf: explicitly disabled via build config
00:02:41.750
00:02:41.750 libs:
00:02:41.750 metrics: explicitly disabled via build config
00:02:41.750 acl: explicitly disabled via build config
00:02:41.750 bbdev: explicitly disabled via build config
00:02:41.750 bitratestats: explicitly disabled via build config
00:02:41.750 bpf: explicitly disabled via build config
00:02:41.750 cfgfile: explicitly disabled via build config
00:02:41.750 distributor: explicitly disabled via build config
00:02:41.750 efd: explicitly disabled via build config
00:02:41.750 eventdev: explicitly disabled via build config
00:02:41.750 dispatcher: explicitly disabled via build config
00:02:41.750 gpudev: explicitly disabled via build config
00:02:41.750 gro: explicitly disabled via build config
00:02:41.750 gso: explicitly disabled via build config
00:02:41.750 ip_frag: explicitly disabled via build config
00:02:41.750 jobstats: explicitly disabled via build config
00:02:41.750 latencystats: explicitly disabled via build config
00:02:41.750 lpm: explicitly disabled via build config
00:02:41.750 member: explicitly disabled via build config
00:02:41.750 pcapng: explicitly disabled via build config
00:02:41.750 rawdev: explicitly disabled via build config
00:02:41.750 regexdev: explicitly disabled via build config
00:02:41.750 mldev: explicitly disabled via build config
00:02:41.750 rib: explicitly disabled via build config
00:02:41.750 sched: explicitly disabled via build config
00:02:41.750 stack: explicitly disabled via build config
00:02:41.750 ipsec: explicitly disabled via build config
00:02:41.750 pdcp: explicitly disabled via build config
00:02:41.750 fib: explicitly disabled via build config
00:02:41.750 port: explicitly disabled via build config
00:02:41.750 pdump: explicitly disabled via build config
00:02:41.750 table: explicitly disabled via build config
00:02:41.750 pipeline: explicitly disabled via build config
00:02:41.750 graph: explicitly disabled via build config
00:02:41.750 node: explicitly disabled via build config
00:02:41.750
00:02:41.750 drivers:
00:02:41.750 common/cpt: not in enabled drivers build config
00:02:41.750 common/dpaax: not in enabled drivers build config
00:02:41.750 common/iavf: not in enabled drivers build config
00:02:41.750 common/idpf: not in enabled drivers build config
00:02:41.750 common/mvep: not in enabled drivers build config
00:02:41.750 common/octeontx: not in enabled drivers build config
00:02:41.750 bus/auxiliary: not in enabled drivers build config
00:02:41.750 bus/cdx: not in enabled drivers build config
00:02:41.750 bus/dpaa: not in enabled drivers build config
00:02:41.750 bus/fslmc: not in enabled drivers build config
00:02:41.750 bus/ifpga: not in enabled drivers build config
00:02:41.750 bus/platform: not in enabled drivers build config
00:02:41.750 bus/vmbus: not in enabled drivers build config
00:02:41.750 common/cnxk: not in enabled drivers build config
00:02:41.750 common/mlx5: not in enabled drivers build config
00:02:41.750 common/nfp: not in enabled drivers build config
00:02:41.750 common/qat: not in enabled drivers build config
00:02:41.750 common/sfc_efx: not in enabled drivers build config
00:02:41.750 mempool/bucket: not in enabled drivers build config
00:02:41.750 mempool/cnxk: not in enabled drivers build config
00:02:41.750 mempool/dpaa: not in enabled drivers build config
00:02:41.750 mempool/dpaa2: not in enabled drivers build config
00:02:41.750 mempool/octeontx: not in enabled drivers build config
00:02:41.750 mempool/stack: not in enabled drivers build config
00:02:41.750 dma/cnxk: not in enabled drivers build config
00:02:41.750 dma/dpaa: not in enabled drivers build config
00:02:41.750 dma/dpaa2: not in enabled drivers build config
00:02:41.750 dma/hisilicon: not in enabled drivers build config
00:02:41.750 dma/idxd: not in enabled drivers build config
00:02:41.750 dma/ioat: not in enabled drivers build config
00:02:41.750 dma/skeleton: not in enabled drivers build config
00:02:41.750 net/af_packet: not in enabled drivers build config
00:02:41.750 net/af_xdp: not in enabled drivers build config
00:02:41.750 net/ark: not in enabled drivers build config
00:02:41.750 net/atlantic: not in enabled drivers build config
00:02:41.750 net/avp: not in enabled drivers build config
00:02:41.750 net/axgbe: not in enabled drivers build config
00:02:41.750 net/bnx2x: not in enabled drivers build config
00:02:41.750 net/bnxt: not in enabled drivers build config
00:02:41.750 net/bonding: not in enabled drivers build config
00:02:41.750 net/cnxk: not in enabled drivers build config
00:02:41.750 net/cpfl: not in enabled drivers build config
00:02:41.750 net/cxgbe: not in enabled drivers build config
00:02:41.750 net/dpaa: not in enabled drivers build config
00:02:41.750 net/dpaa2: not in enabled drivers build config
00:02:41.750 net/e1000: not in enabled drivers build config
00:02:41.750 net/ena: not in enabled drivers build config
00:02:41.750 net/enetc: not in enabled drivers build config
00:02:41.750 net/enetfec: not in enabled drivers build config
00:02:41.750 net/enic: not in enabled drivers build config
00:02:41.750 net/failsafe: not in enabled drivers build config
00:02:41.750 net/fm10k: not in enabled drivers build config
00:02:41.750 net/gve: not in enabled drivers build config
00:02:41.750 net/hinic: not in enabled drivers build config
00:02:41.750 net/hns3: not in enabled drivers build config
00:02:41.750 net/i40e: not in enabled drivers build config
00:02:41.750 net/iavf: not in enabled drivers build config
00:02:41.750 net/ice: not in enabled drivers build config
00:02:41.750 net/idpf: not in enabled drivers build config
00:02:41.750 net/igc: not in enabled drivers build config
00:02:41.750 net/ionic: not in enabled drivers build config
00:02:41.750 net/ipn3ke: not in enabled drivers build config
00:02:41.750 net/ixgbe: not in enabled drivers build config
00:02:41.750 net/mana: not in enabled drivers build config
00:02:41.750 net/memif: not in enabled drivers build config
00:02:41.750 net/mlx4: not in enabled drivers build config
00:02:41.750 net/mlx5: not in enabled drivers build config
00:02:41.750 net/mvneta: not in enabled drivers build config
00:02:41.750 net/mvpp2: not in enabled drivers build config
00:02:41.750 net/netvsc: not in enabled drivers build config
00:02:41.750 net/nfb: not in enabled drivers build config
00:02:41.750 net/nfp: not in enabled drivers build config
00:02:41.750 net/ngbe: not in enabled drivers build config
00:02:41.750 net/null: not in enabled drivers build config
00:02:41.750 net/octeontx: not in enabled drivers build config
00:02:41.750 net/octeon_ep: not in enabled drivers build config
00:02:41.750 net/pcap: not in enabled drivers build config
00:02:41.750 net/pfe: not in enabled drivers build config
00:02:41.750 net/qede: not in enabled drivers build config
00:02:41.750 net/ring: not in enabled drivers build config
00:02:41.750 net/sfc: not in enabled drivers build config
00:02:41.750 net/softnic: not in enabled drivers build config
00:02:41.750 net/tap: not in enabled drivers build config
00:02:41.750 net/thunderx: not in enabled drivers build config
00:02:41.750 net/txgbe: not in enabled drivers build config
00:02:41.750 net/vdev_netvsc: not in enabled drivers build config
00:02:41.751 net/vhost: not in enabled drivers build config
00:02:41.751 net/virtio: not in enabled drivers build config
00:02:41.751 net/vmxnet3: not in enabled drivers build config
00:02:41.751 raw/*: missing internal dependency, "rawdev"
00:02:41.751 crypto/armv8: not in enabled drivers build config
00:02:41.751 crypto/bcmfs: not in enabled drivers build config
00:02:41.751 crypto/caam_jr: not in enabled drivers build config
00:02:41.751 crypto/ccp: not in enabled drivers build config
00:02:41.751 crypto/cnxk: not in enabled drivers build config
00:02:41.751 crypto/dpaa_sec: not in enabled drivers build config
00:02:41.751 crypto/dpaa2_sec: not in enabled drivers build config
00:02:41.751 crypto/ipsec_mb: not in enabled drivers build config
00:02:41.751 crypto/mlx5: not in enabled drivers build config
00:02:41.751 crypto/mvsam: not in enabled drivers build config
00:02:41.751 crypto/nitrox: not in enabled drivers build config
00:02:41.751 crypto/null: not in enabled drivers build config
00:02:41.751 crypto/octeontx: not in enabled drivers build config
00:02:41.751 crypto/openssl: not in enabled drivers build config
00:02:41.751 crypto/scheduler: not in enabled drivers build config
00:02:41.751 crypto/uadk: not in enabled drivers build config
00:02:41.751 crypto/virtio: not in enabled drivers build config
00:02:41.751 compress/isal: not in enabled drivers build config
00:02:41.751 compress/mlx5: not in enabled drivers build config
00:02:41.751 compress/octeontx: not in enabled drivers build config
00:02:41.751 compress/zlib: not in enabled drivers build config
00:02:41.751 regex/*: missing internal dependency, "regexdev"
00:02:41.751 ml/*: missing internal dependency, "mldev"
00:02:41.751 vdpa/ifc: not in enabled drivers build config
00:02:41.751 vdpa/mlx5: not in enabled drivers build config
00:02:41.751 vdpa/nfp: not in enabled drivers build config
00:02:41.751 vdpa/sfc: not in enabled drivers build config
00:02:41.751 event/*: missing internal dependency, "eventdev"
00:02:41.751 baseband/*: missing internal dependency, "bbdev"
00:02:41.751 gpu/*: missing internal dependency, "gpudev"
00:02:41.751
00:02:41.751
00:02:41.751 Build targets in project: 85
00:02:41.751
00:02:41.751 DPDK 23.11.0
00:02:41.751
00:02:41.751 User defined options
00:02:41.751 buildtype : debug
00:02:41.751 default_library : static
00:02:41.751 libdir : lib
00:02:41.751 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:41.751 b_sanitize : address
00:02:41.751 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon
00:02:41.751 c_link_args :
00:02:41.751 cpu_instruction_set: native
00:02:41.751 disable_apps : test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf
00:02:41.751 disable_libs : node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev
00:02:41.751 enable_docs : false
00:02:41.751 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:41.751 enable_kmods : false
00:02:41.751 tests : false
00:02:41.751
00:02:41.751 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:41.751 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:41.751 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:41.751 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:41.751 [3/265] Linking static target lib/librte_kvargs.a
00:02:41.751 [4/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:41.751 [5/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:41.751 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:41.751 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:41.751 [8/265] Linking static target lib/librte_log.a
00:02:41.751 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:41.751 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:41.751 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:41.751 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:41.751 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:41.751 [14/265] Linking static target lib/librte_telemetry.a
00:02:41.751 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:41.751 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:41.751 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:41.751 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:41.751 [19/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:41.751 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:41.751 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:41.751 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:41.751 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:41.751 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:41.751 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:41.751 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:41.751 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:41.751 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:41.751 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:42.009 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:42.009 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:42.009 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:42.009 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:42.009 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:42.009 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:42.009 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:42.267 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:42.267 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:42.267 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:42.267 [40/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:42.267 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:42.267 [42/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:42.525 [43/265] Linking target lib/librte_log.so.24.0
00:02:42.525 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:42.525 [45/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:02:42.525 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:42.525 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:42.525 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:42.525 [49/265] Linking target lib/librte_kvargs.so.24.0
00:02:42.525 [50/265] Linking target lib/librte_telemetry.so.24.0
00:02:42.525 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:42.783 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:42.783 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:42.783 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:42.783 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:42.783 [56/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:02:42.783 [57/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:02:42.783 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:43.041 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:43.041 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:43.041 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:43.041 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:43.041 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:43.041 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:43.299 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:43.299 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
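The "User defined options" summary above maps onto a meson invocation roughly like the sketch below (reconstructed from that summary, not the literal CI command; the long disable_apps/disable_libs values are elided here, see the lists above):

  meson setup build-tmp \
      --buildtype debug --default-library static --libdir lib \
      --prefix /home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon' \
      -Ddisable_apps=... -Ddisable_libs=... \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Dtests=false
  ninja -C build-tmp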
00:02:43.299 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:43.299 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:43.299 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:43.299 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:43.299 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:43.557 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:43.557 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:43.557 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:43.557 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:43.557 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:43.557 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:43.816 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:43.816 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:43.816 [80/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:43.816 [81/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:43.816 [82/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:44.074 [83/265] Linking static target lib/librte_ring.a
00:02:44.074 [84/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:44.074 [85/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:44.331 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:44.331 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:44.331 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:44.331 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:44.331 [90/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:44.331 [91/265] Linking static target lib/librte_mempool.a
00:02:44.331 [92/265] Linking static target lib/librte_rcu.a
00:02:44.331 [93/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:44.331 [94/265] Linking static target lib/librte_eal.a
00:02:44.589 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:44.589 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:44.589 [97/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:44.589 [98/265] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:44.589 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:44.847 [100/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.847 [101/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.847 [102/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:45.105 [103/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:45.105 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:45.105 [105/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:45.105 [106/265] Linking static target lib/librte_mbuf.a
00:02:45.105 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:45.105 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:45.105 [109/265] Linking static target lib/librte_net.a
00:02:45.105 [110/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:45.105 [111/265] Linking static target lib/librte_meter.a
00:02:45.364 [112/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.364 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:45.364 [114/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.364 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:45.364 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:45.364 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:45.637 [118/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.910 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:45.910 [120/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.910 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:45.910 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:45.910 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:45.910 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:46.169 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:46.169 [126/265] Linking static target lib/librte_pci.a
00:02:46.169 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:46.169 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:46.169 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:46.169 [130/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.169 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:46.169 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:46.427 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:46.427 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:46.427 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:46.427 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:46.427 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:46.427 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:46.427 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:46.427 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:46.427 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:46.685 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:46.685 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:46.685 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:46.685 [145/265] Linking static target lib/librte_cmdline.a
00:02:46.943 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:46.943 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:46.943 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:46.943 [149/265] Linking static target lib/librte_timer.a
00:02:46.943 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:46.943 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:46.943 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:47.202 [153/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:47.202 [154/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.202 [155/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:47.202 [156/265] Linking static target lib/librte_compressdev.a
00:02:47.202 [157/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:47.202 [158/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:47.461 [159/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:47.461 [160/265] Linking static target lib/librte_hash.a
00:02:47.461 [161/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:47.461 [162/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:47.461 [163/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.461 [164/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:47.461 [165/265] Linking static target lib/librte_dmadev.a
00:02:47.461 [166/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:47.718 [167/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:47.718 [168/265] Linking static target lib/librte_ethdev.a
00:02:47.718 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:47.718 [170/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:47.976 [171/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.976 [172/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.976 [173/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:47.976 [174/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:47.976 [175/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.976 [176/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:47.976 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:47.976 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:48.235 [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:48.235 [180/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:48.235 [181/265] Linking static target lib/librte_cryptodev.a
00:02:48.235 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:48.235 [183/265] Linking static target lib/librte_power.a
00:02:48.493 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:48.493 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:48.493 [186/265] Linking static target lib/librte_reorder.a
00:02:48.493 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:48.493 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:48.493 [189/265] Compiling C object
lib/librte_security.a.p/security_rte_security.c.o 00:02:48.493 [190/265] Linking static target lib/librte_security.a 00:02:48.752 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.010 [192/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.010 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:49.010 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.010 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:49.268 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:49.268 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:49.526 [198/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:49.526 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:49.526 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:49.526 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:49.526 [202/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.784 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:49.784 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:49.784 [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:49.784 [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:49.784 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:49.784 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:50.044 [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:50.044 [210/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.044 [211/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.044 [212/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:50.044 [213/265] Linking static target drivers/librte_bus_vdev.a 00:02:50.044 [214/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:50.044 [215/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:50.044 [216/265] Linking static target drivers/librte_bus_pci.a 00:02:50.302 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:50.302 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:50.302 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.560 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:50.560 [221/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:50.560 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:50.560 [223/265] Linking static target drivers/librte_mempool_ring.a 00:02:50.560 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.464 [225/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.464 [226/265] 
Linking target lib/librte_eal.so.24.0 00:02:52.464 [227/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:52.464 [228/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:52.464 [229/265] Linking target lib/librte_ring.so.24.0 00:02:52.464 [230/265] Linking target lib/librte_pci.so.24.0 00:02:52.464 [231/265] Linking target lib/librte_meter.so.24.0 00:02:52.464 [232/265] Linking target lib/librte_dmadev.so.24.0 00:02:52.464 [233/265] Linking target lib/librte_timer.so.24.0 00:02:52.464 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:52.464 [235/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:52.464 [236/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:52.464 [237/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:52.464 [238/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:52.464 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:52.464 [240/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:52.721 [241/265] Linking target lib/librte_rcu.so.24.0 00:02:52.721 [242/265] Linking target lib/librte_mempool.so.24.0 00:02:52.722 [243/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:52.722 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:52.722 [245/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:52.722 [246/265] Linking target lib/librte_mbuf.so.24.0 00:02:52.979 [247/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:52.979 [248/265] Linking target lib/librte_reorder.so.24.0 00:02:52.979 [249/265] Linking target lib/librte_compressdev.so.24.0 00:02:52.979 [250/265] Linking target lib/librte_net.so.24.0 00:02:52.979 [251/265] Linking target lib/librte_cryptodev.so.24.0 00:02:53.238 [252/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:53.238 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:53.238 [254/265] Linking target lib/librte_hash.so.24.0 00:02:53.238 [255/265] Linking target lib/librte_security.so.24.0 00:02:53.238 [256/265] Linking target lib/librte_cmdline.so.24.0 00:02:53.238 [257/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.238 [258/265] Linking target lib/librte_ethdev.so.24.0 00:02:53.238 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:53.496 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:53.496 [261/265] Linking target lib/librte_power.so.24.0 00:02:55.398 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:55.398 [263/265] Linking static target lib/librte_vhost.a 00:02:57.297 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.297 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:57.297 INFO: autodetecting backend as ninja 00:02:57.297 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:57.864 CC lib/ut_mock/mock.o 00:02:57.864 CC lib/ut/ut.o 00:02:57.864 CC lib/log/log_flags.o 00:02:57.864 CC lib/log/log.o 00:02:57.864 CC lib/log/log_deprecated.o 00:02:58.122 LIB 
libspdk_ut_mock.a 00:02:58.122 LIB libspdk_ut.a 00:02:58.122 LIB libspdk_log.a 00:02:58.381 CXX lib/trace_parser/trace.o 00:02:58.381 CC lib/dma/dma.o 00:02:58.381 CC lib/ioat/ioat.o 00:02:58.381 CC lib/util/base64.o 00:02:58.381 CC lib/util/bit_array.o 00:02:58.381 CC lib/util/crc16.o 00:02:58.381 CC lib/util/cpuset.o 00:02:58.381 CC lib/util/crc32.o 00:02:58.381 CC lib/util/crc32c.o 00:02:58.381 CC lib/vfio_user/host/vfio_user_pci.o 00:02:58.381 CC lib/util/crc32_ieee.o 00:02:58.381 CC lib/util/crc64.o 00:02:58.381 CC lib/util/dif.o 00:02:58.381 CC lib/util/fd.o 00:02:58.639 LIB libspdk_dma.a 00:02:58.639 CC lib/vfio_user/host/vfio_user.o 00:02:58.639 CC lib/util/file.o 00:02:58.639 CC lib/util/hexlify.o 00:02:58.639 CC lib/util/iov.o 00:02:58.639 CC lib/util/math.o 00:02:58.639 CC lib/util/pipe.o 00:02:58.639 LIB libspdk_ioat.a 00:02:58.639 CC lib/util/strerror_tls.o 00:02:58.639 CC lib/util/string.o 00:02:58.639 CC lib/util/uuid.o 00:02:58.639 CC lib/util/fd_group.o 00:02:58.897 CC lib/util/xor.o 00:02:58.897 LIB libspdk_vfio_user.a 00:02:58.897 CC lib/util/zipf.o 00:02:59.155 LIB libspdk_util.a 00:02:59.412 CC lib/rdma/common.o 00:02:59.412 CC lib/conf/conf.o 00:02:59.412 CC lib/rdma/rdma_verbs.o 00:02:59.412 CC lib/vmd/led.o 00:02:59.412 CC lib/vmd/vmd.o 00:02:59.412 CC lib/env_dpdk/env.o 00:02:59.412 CC lib/env_dpdk/memory.o 00:02:59.412 CC lib/json/json_parse.o 00:02:59.412 CC lib/idxd/idxd.o 00:02:59.412 LIB libspdk_trace_parser.a 00:02:59.412 CC lib/idxd/idxd_user.o 00:02:59.412 CC lib/env_dpdk/pci.o 00:02:59.412 LIB libspdk_conf.a 00:02:59.669 CC lib/env_dpdk/init.o 00:02:59.669 CC lib/json/json_util.o 00:02:59.669 CC lib/json/json_write.o 00:02:59.669 LIB libspdk_rdma.a 00:02:59.669 CC lib/env_dpdk/threads.o 00:02:59.669 CC lib/env_dpdk/pci_ioat.o 00:02:59.927 CC lib/env_dpdk/pci_virtio.o 00:02:59.927 CC lib/env_dpdk/pci_vmd.o 00:02:59.927 CC lib/env_dpdk/pci_idxd.o 00:02:59.927 CC lib/env_dpdk/pci_event.o 00:02:59.927 LIB libspdk_json.a 00:02:59.927 CC lib/env_dpdk/sigbus_handler.o 00:02:59.927 CC lib/env_dpdk/pci_dpdk.o 00:02:59.927 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:59.927 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:59.927 LIB libspdk_idxd.a 00:02:59.927 CC lib/jsonrpc/jsonrpc_server.o 00:02:59.927 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:59.927 CC lib/jsonrpc/jsonrpc_client.o 00:02:59.927 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:00.184 LIB libspdk_vmd.a 00:03:00.441 LIB libspdk_jsonrpc.a 00:03:00.441 CC lib/rpc/rpc.o 00:03:00.698 LIB libspdk_rpc.a 00:03:00.974 CC lib/notify/notify.o 00:03:00.974 CC lib/notify/notify_rpc.o 00:03:00.974 CC lib/trace/trace.o 00:03:00.974 CC lib/trace/trace_rpc.o 00:03:00.974 CC lib/trace/trace_flags.o 00:03:00.974 CC lib/sock/sock.o 00:03:00.974 CC lib/sock/sock_rpc.o 00:03:00.974 LIB libspdk_env_dpdk.a 00:03:00.974 LIB libspdk_notify.a 00:03:00.974 LIB libspdk_trace.a 00:03:01.238 CC lib/thread/thread.o 00:03:01.238 CC lib/thread/iobuf.o 00:03:01.238 LIB libspdk_sock.a 00:03:01.496 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:01.496 CC lib/nvme/nvme_ctrlr.o 00:03:01.496 CC lib/nvme/nvme_fabric.o 00:03:01.496 CC lib/nvme/nvme_ns_cmd.o 00:03:01.496 CC lib/nvme/nvme_ns.o 00:03:01.496 CC lib/nvme/nvme_pcie_common.o 00:03:01.496 CC lib/nvme/nvme_qpair.o 00:03:01.496 CC lib/nvme/nvme_pcie.o 00:03:01.496 CC lib/nvme/nvme.o 00:03:02.062 CC lib/nvme/nvme_quirks.o 00:03:02.062 CC lib/nvme/nvme_transport.o 00:03:02.062 CC lib/nvme/nvme_discovery.o 00:03:02.062 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:02.320 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:02.320 CC 
lib/nvme/nvme_tcp.o 00:03:02.320 CC lib/nvme/nvme_opal.o 00:03:02.579 CC lib/nvme/nvme_io_msg.o 00:03:02.579 CC lib/nvme/nvme_poll_group.o 00:03:02.579 CC lib/nvme/nvme_zns.o 00:03:02.579 CC lib/nvme/nvme_cuse.o 00:03:02.579 CC lib/nvme/nvme_vfio_user.o 00:03:02.836 CC lib/nvme/nvme_rdma.o 00:03:02.836 LIB libspdk_thread.a 00:03:03.094 CC lib/accel/accel.o 00:03:03.094 CC lib/accel/accel_rpc.o 00:03:03.094 CC lib/blob/blobstore.o 00:03:03.094 CC lib/init/json_config.o 00:03:03.094 CC lib/virtio/virtio.o 00:03:03.094 CC lib/init/subsystem.o 00:03:03.352 CC lib/accel/accel_sw.o 00:03:03.352 CC lib/init/subsystem_rpc.o 00:03:03.352 CC lib/init/rpc.o 00:03:03.352 CC lib/virtio/virtio_vhost_user.o 00:03:03.352 CC lib/blob/request.o 00:03:03.352 LIB libspdk_init.a 00:03:03.610 CC lib/blob/zeroes.o 00:03:03.610 CC lib/virtio/virtio_vfio_user.o 00:03:03.610 CC lib/virtio/virtio_pci.o 00:03:03.610 CC lib/blob/blob_bs_dev.o 00:03:03.610 CC lib/event/app.o 00:03:03.610 CC lib/event/reactor.o 00:03:03.868 CC lib/event/log_rpc.o 00:03:03.868 CC lib/event/app_rpc.o 00:03:03.868 CC lib/event/scheduler_static.o 00:03:03.868 LIB libspdk_virtio.a 00:03:04.127 LIB libspdk_nvme.a 00:03:04.127 LIB libspdk_event.a 00:03:04.127 LIB libspdk_accel.a 00:03:04.384 CC lib/bdev/bdev.o 00:03:04.385 CC lib/bdev/bdev_rpc.o 00:03:04.385 CC lib/bdev/bdev_zone.o 00:03:04.385 CC lib/bdev/part.o 00:03:04.385 CC lib/bdev/scsi_nvme.o 00:03:06.913 LIB libspdk_blob.a 00:03:06.913 CC lib/lvol/lvol.o 00:03:06.913 CC lib/blobfs/blobfs.o 00:03:06.913 CC lib/blobfs/tree.o 00:03:07.505 LIB libspdk_bdev.a 00:03:07.505 CC lib/scsi/dev.o 00:03:07.505 CC lib/scsi/lun.o 00:03:07.505 CC lib/scsi/port.o 00:03:07.505 CC lib/nbd/nbd.o 00:03:07.505 CC lib/ftl/ftl_core.o 00:03:07.505 CC lib/scsi/scsi.o 00:03:07.505 CC lib/nbd/nbd_rpc.o 00:03:07.505 CC lib/nvmf/ctrlr.o 00:03:07.505 LIB libspdk_blobfs.a 00:03:07.763 CC lib/nvmf/ctrlr_discovery.o 00:03:07.763 LIB libspdk_lvol.a 00:03:07.763 CC lib/nvmf/ctrlr_bdev.o 00:03:07.763 CC lib/scsi/scsi_bdev.o 00:03:07.763 CC lib/ftl/ftl_init.o 00:03:07.763 CC lib/ftl/ftl_layout.o 00:03:08.020 CC lib/ftl/ftl_debug.o 00:03:08.021 CC lib/scsi/scsi_pr.o 00:03:08.021 CC lib/scsi/scsi_rpc.o 00:03:08.021 CC lib/ftl/ftl_io.o 00:03:08.021 LIB libspdk_nbd.a 00:03:08.021 CC lib/ftl/ftl_sb.o 00:03:08.278 CC lib/ftl/ftl_l2p.o 00:03:08.278 CC lib/ftl/ftl_l2p_flat.o 00:03:08.278 CC lib/scsi/task.o 00:03:08.278 CC lib/nvmf/subsystem.o 00:03:08.278 CC lib/nvmf/nvmf.o 00:03:08.278 CC lib/nvmf/nvmf_rpc.o 00:03:08.278 CC lib/nvmf/transport.o 00:03:08.278 CC lib/ftl/ftl_nv_cache.o 00:03:08.278 CC lib/ftl/ftl_band.o 00:03:08.278 CC lib/ftl/ftl_band_ops.o 00:03:08.278 LIB libspdk_scsi.a 00:03:08.536 CC lib/ftl/ftl_writer.o 00:03:08.536 CC lib/nvmf/tcp.o 00:03:08.794 CC lib/ftl/ftl_rq.o 00:03:08.794 CC lib/iscsi/conn.o 00:03:08.794 CC lib/vhost/vhost.o 00:03:08.794 CC lib/ftl/ftl_reloc.o 00:03:09.052 CC lib/ftl/ftl_l2p_cache.o 00:03:09.310 CC lib/nvmf/rdma.o 00:03:09.310 CC lib/vhost/vhost_rpc.o 00:03:09.310 CC lib/vhost/vhost_scsi.o 00:03:09.310 CC lib/vhost/vhost_blk.o 00:03:09.568 CC lib/vhost/rte_vhost_user.o 00:03:09.568 CC lib/ftl/ftl_p2l.o 00:03:09.568 CC lib/iscsi/init_grp.o 00:03:09.568 CC lib/iscsi/iscsi.o 00:03:09.568 CC lib/iscsi/md5.o 00:03:09.827 CC lib/iscsi/param.o 00:03:09.827 CC lib/ftl/mngt/ftl_mngt.o 00:03:09.827 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:09.827 CC lib/iscsi/portal_grp.o 00:03:10.085 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:10.085 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:10.085 CC 
lib/ftl/mngt/ftl_mngt_md.o 00:03:10.085 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:10.085 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:10.342 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:10.342 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:10.342 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:10.342 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:10.342 LIB libspdk_vhost.a 00:03:10.342 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:10.342 CC lib/iscsi/tgt_node.o 00:03:10.342 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:10.600 CC lib/ftl/utils/ftl_conf.o 00:03:10.600 CC lib/ftl/utils/ftl_md.o 00:03:10.600 CC lib/ftl/utils/ftl_mempool.o 00:03:10.600 CC lib/ftl/utils/ftl_bitmap.o 00:03:10.600 CC lib/ftl/utils/ftl_property.o 00:03:10.600 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:10.600 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:10.600 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:10.859 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:10.859 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:10.859 CC lib/iscsi/iscsi_subsystem.o 00:03:10.859 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:10.859 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:10.859 CC lib/iscsi/iscsi_rpc.o 00:03:10.859 CC lib/iscsi/task.o 00:03:10.859 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:10.859 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:11.117 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:11.117 CC lib/ftl/base/ftl_base_dev.o 00:03:11.117 CC lib/ftl/base/ftl_base_bdev.o 00:03:11.117 CC lib/ftl/ftl_trace.o 00:03:11.376 LIB libspdk_iscsi.a 00:03:11.376 LIB libspdk_ftl.a 00:03:11.942 LIB libspdk_nvmf.a 00:03:11.942 CC module/env_dpdk/env_dpdk_rpc.o 00:03:11.942 CC module/blob/bdev/blob_bdev.o 00:03:11.942 CC module/accel/error/accel_error.o 00:03:11.942 CC module/accel/dsa/accel_dsa.o 00:03:11.942 CC module/accel/iaa/accel_iaa.o 00:03:11.942 CC module/sock/posix/posix.o 00:03:11.942 CC module/accel/ioat/accel_ioat.o 00:03:11.942 CC module/scheduler/gscheduler/gscheduler.o 00:03:11.943 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:11.943 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:12.200 LIB libspdk_env_dpdk_rpc.a 00:03:12.200 CC module/accel/ioat/accel_ioat_rpc.o 00:03:12.200 LIB libspdk_scheduler_dpdk_governor.a 00:03:12.200 CC module/accel/error/accel_error_rpc.o 00:03:12.200 LIB libspdk_scheduler_gscheduler.a 00:03:12.200 LIB libspdk_scheduler_dynamic.a 00:03:12.200 CC module/accel/iaa/accel_iaa_rpc.o 00:03:12.200 CC module/accel/dsa/accel_dsa_rpc.o 00:03:12.457 LIB libspdk_accel_ioat.a 00:03:12.457 LIB libspdk_accel_error.a 00:03:12.457 LIB libspdk_accel_dsa.a 00:03:12.457 LIB libspdk_blob_bdev.a 00:03:12.457 LIB libspdk_accel_iaa.a 00:03:12.457 CC module/bdev/delay/vbdev_delay.o 00:03:12.457 CC module/bdev/lvol/vbdev_lvol.o 00:03:12.457 CC module/bdev/malloc/bdev_malloc.o 00:03:12.457 CC module/bdev/null/bdev_null.o 00:03:12.457 CC module/bdev/gpt/gpt.o 00:03:12.714 CC module/bdev/passthru/vbdev_passthru.o 00:03:12.714 CC module/blobfs/bdev/blobfs_bdev.o 00:03:12.714 CC module/bdev/error/vbdev_error.o 00:03:12.714 CC module/bdev/nvme/bdev_nvme.o 00:03:12.714 CC module/bdev/gpt/vbdev_gpt.o 00:03:12.714 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:12.971 CC module/bdev/null/bdev_null_rpc.o 00:03:12.971 CC module/bdev/error/vbdev_error_rpc.o 00:03:12.971 LIB libspdk_sock_posix.a 00:03:12.971 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:12.971 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:12.971 LIB libspdk_blobfs_bdev.a 00:03:12.971 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:12.971 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:12.971 CC module/bdev/nvme/nvme_rpc.o 00:03:12.971 LIB 
libspdk_bdev_error.a 00:03:12.971 LIB libspdk_bdev_gpt.a 00:03:12.971 LIB libspdk_bdev_null.a 00:03:12.971 LIB libspdk_bdev_passthru.a 00:03:12.971 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:13.229 LIB libspdk_bdev_malloc.a 00:03:13.229 CC module/bdev/raid/bdev_raid.o 00:03:13.229 LIB libspdk_bdev_delay.a 00:03:13.229 CC module/bdev/split/vbdev_split.o 00:03:13.229 CC module/bdev/split/vbdev_split_rpc.o 00:03:13.229 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:13.229 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:13.229 CC module/bdev/aio/bdev_aio.o 00:03:13.229 CC module/bdev/nvme/bdev_mdns_client.o 00:03:13.229 CC module/bdev/nvme/vbdev_opal.o 00:03:13.487 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:13.487 LIB libspdk_bdev_split.a 00:03:13.487 CC module/bdev/aio/bdev_aio_rpc.o 00:03:13.487 LIB libspdk_bdev_lvol.a 00:03:13.487 CC module/bdev/ftl/bdev_ftl.o 00:03:13.487 LIB libspdk_bdev_zone_block.a 00:03:13.487 CC module/bdev/iscsi/bdev_iscsi.o 00:03:13.487 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:13.487 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:13.487 LIB libspdk_bdev_aio.a 00:03:13.744 CC module/bdev/raid/bdev_raid_rpc.o 00:03:13.744 CC module/bdev/raid/bdev_raid_sb.o 00:03:13.744 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:13.744 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:13.744 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:13.744 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:13.744 CC module/bdev/raid/raid0.o 00:03:13.744 CC module/bdev/raid/raid1.o 00:03:14.043 LIB libspdk_bdev_ftl.a 00:03:14.043 CC module/bdev/raid/concat.o 00:03:14.043 CC module/bdev/raid/raid5f.o 00:03:14.043 LIB libspdk_bdev_iscsi.a 00:03:14.300 LIB libspdk_bdev_virtio.a 00:03:14.556 LIB libspdk_bdev_raid.a 00:03:15.120 LIB libspdk_bdev_nvme.a 00:03:15.378 CC module/event/subsystems/vmd/vmd.o 00:03:15.378 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:15.378 CC module/event/subsystems/sock/sock.o 00:03:15.378 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:15.378 CC module/event/subsystems/scheduler/scheduler.o 00:03:15.378 CC module/event/subsystems/iobuf/iobuf.o 00:03:15.378 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:15.635 LIB libspdk_event_vhost_blk.a 00:03:15.635 LIB libspdk_event_sock.a 00:03:15.635 LIB libspdk_event_vmd.a 00:03:15.635 LIB libspdk_event_scheduler.a 00:03:15.635 LIB libspdk_event_iobuf.a 00:03:15.894 CC module/event/subsystems/accel/accel.o 00:03:15.894 LIB libspdk_event_accel.a 00:03:16.152 CC module/event/subsystems/bdev/bdev.o 00:03:16.410 LIB libspdk_event_bdev.a 00:03:16.410 CC module/event/subsystems/scsi/scsi.o 00:03:16.410 CC module/event/subsystems/nbd/nbd.o 00:03:16.410 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:16.410 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:16.668 LIB libspdk_event_nbd.a 00:03:16.668 LIB libspdk_event_scsi.a 00:03:16.668 LIB libspdk_event_nvmf.a 00:03:16.668 CC module/event/subsystems/iscsi/iscsi.o 00:03:16.668 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:16.925 LIB libspdk_event_vhost_scsi.a 00:03:16.925 LIB libspdk_event_iscsi.a 00:03:17.183 CC app/trace_record/trace_record.o 00:03:17.183 CXX app/trace/trace.o 00:03:17.183 CC examples/ioat/perf/perf.o 00:03:17.183 CC examples/nvme/hello_world/hello_world.o 00:03:17.183 CC examples/sock/hello_world/hello_sock.o 00:03:17.183 CC examples/accel/perf/accel_perf.o 00:03:17.183 CC examples/vmd/lsvmd/lsvmd.o 00:03:17.183 CC examples/blob/hello_world/hello_blob.o 00:03:17.183 CC test/accel/dif/dif.o 00:03:17.183 CC examples/bdev/hello_world/hello_bdev.o 
00:03:17.441 LINK lsvmd 00:03:17.441 LINK spdk_trace_record 00:03:17.441 LINK ioat_perf 00:03:17.441 LINK hello_world 00:03:17.441 LINK hello_blob 00:03:17.441 LINK hello_sock 00:03:17.441 LINK hello_bdev 00:03:17.441 LINK spdk_trace 00:03:17.699 LINK dif 00:03:17.699 LINK accel_perf 00:03:17.957 CC examples/ioat/verify/verify.o 00:03:17.957 CC app/nvmf_tgt/nvmf_main.o 00:03:17.957 CC app/iscsi_tgt/iscsi_tgt.o 00:03:18.215 LINK nvmf_tgt 00:03:18.215 LINK verify 00:03:18.215 LINK iscsi_tgt 00:03:18.475 CC app/spdk_tgt/spdk_tgt.o 00:03:18.475 CC examples/vmd/led/led.o 00:03:18.733 LINK spdk_tgt 00:03:18.733 CC examples/nvme/reconnect/reconnect.o 00:03:18.733 LINK led 00:03:18.733 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:18.992 CC examples/nvme/arbitration/arbitration.o 00:03:18.992 LINK reconnect 00:03:19.249 LINK arbitration 00:03:19.249 LINK nvme_manage 00:03:19.815 CC examples/nvme/hotplug/hotplug.o 00:03:20.072 LINK hotplug 00:03:20.072 CC app/spdk_lspci/spdk_lspci.o 00:03:20.330 LINK spdk_lspci 00:03:20.330 CC app/spdk_nvme_perf/perf.o 00:03:20.330 CC examples/blob/cli/blobcli.o 00:03:20.589 CC examples/bdev/bdevperf/bdevperf.o 00:03:20.589 CC test/app/bdev_svc/bdev_svc.o 00:03:20.589 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:20.847 LINK bdev_svc 00:03:20.847 LINK blobcli 00:03:21.106 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:21.106 LINK nvme_fuzz 00:03:21.363 LINK cmb_copy 00:03:21.363 LINK spdk_nvme_perf 00:03:21.363 LINK bdevperf 00:03:21.363 CC examples/nvme/abort/abort.o 00:03:21.621 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:21.621 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:21.621 LINK pmr_persistence 00:03:21.880 LINK abort 00:03:22.446 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:22.446 CC test/app/histogram_perf/histogram_perf.o 00:03:22.446 CC app/spdk_nvme_identify/identify.o 00:03:22.446 CC test/app/jsoncat/jsoncat.o 00:03:22.446 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:22.446 LINK histogram_perf 00:03:22.704 LINK jsoncat 00:03:22.963 CC test/app/stub/stub.o 00:03:22.963 LINK vhost_fuzz 00:03:22.963 CC app/spdk_nvme_discover/discovery_aer.o 00:03:22.963 LINK stub 00:03:23.221 LINK spdk_nvme_discover 00:03:23.478 CC examples/util/zipf/zipf.o 00:03:23.478 CC examples/nvmf/nvmf/nvmf.o 00:03:23.478 LINK spdk_nvme_identify 00:03:23.478 LINK zipf 00:03:23.736 LINK nvmf 00:03:23.736 LINK iscsi_fuzz 00:03:23.994 CC examples/idxd/perf/perf.o 00:03:23.994 CC examples/thread/thread/thread_ex.o 00:03:23.994 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:24.252 CC app/spdk_top/spdk_top.o 00:03:24.252 LINK interrupt_tgt 00:03:24.252 LINK thread 00:03:24.252 LINK idxd_perf 00:03:24.510 CC app/vhost/vhost.o 00:03:24.510 CC app/spdk_dd/spdk_dd.o 00:03:24.510 CC app/fio/nvme/fio_plugin.o 00:03:24.768 CC test/bdev/bdevio/bdevio.o 00:03:24.768 LINK vhost 00:03:25.025 TEST_HEADER include/spdk/accel.h 00:03:25.025 TEST_HEADER include/spdk/accel_module.h 00:03:25.025 TEST_HEADER include/spdk/assert.h 00:03:25.025 TEST_HEADER include/spdk/barrier.h 00:03:25.025 TEST_HEADER include/spdk/base64.h 00:03:25.025 TEST_HEADER include/spdk/bdev.h 00:03:25.026 TEST_HEADER include/spdk/bdev_module.h 00:03:25.026 TEST_HEADER include/spdk/bdev_zone.h 00:03:25.026 TEST_HEADER include/spdk/bit_array.h 00:03:25.026 TEST_HEADER include/spdk/bit_pool.h 00:03:25.026 TEST_HEADER include/spdk/blob.h 00:03:25.026 TEST_HEADER include/spdk/blob_bdev.h 00:03:25.026 TEST_HEADER include/spdk/blobfs.h 00:03:25.026 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:25.026 TEST_HEADER 
include/spdk/conf.h 00:03:25.026 TEST_HEADER include/spdk/config.h 00:03:25.026 TEST_HEADER include/spdk/cpuset.h 00:03:25.026 TEST_HEADER include/spdk/crc16.h 00:03:25.026 TEST_HEADER include/spdk/crc32.h 00:03:25.026 TEST_HEADER include/spdk/crc64.h 00:03:25.026 TEST_HEADER include/spdk/dif.h 00:03:25.026 TEST_HEADER include/spdk/dma.h 00:03:25.026 TEST_HEADER include/spdk/endian.h 00:03:25.026 TEST_HEADER include/spdk/env.h 00:03:25.026 CC test/blobfs/mkfs/mkfs.o 00:03:25.026 TEST_HEADER include/spdk/env_dpdk.h 00:03:25.026 TEST_HEADER include/spdk/event.h 00:03:25.026 TEST_HEADER include/spdk/fd.h 00:03:25.026 TEST_HEADER include/spdk/fd_group.h 00:03:25.026 TEST_HEADER include/spdk/file.h 00:03:25.026 LINK spdk_dd 00:03:25.026 TEST_HEADER include/spdk/ftl.h 00:03:25.026 TEST_HEADER include/spdk/gpt_spec.h 00:03:25.026 TEST_HEADER include/spdk/hexlify.h 00:03:25.026 TEST_HEADER include/spdk/histogram_data.h 00:03:25.026 TEST_HEADER include/spdk/idxd.h 00:03:25.026 TEST_HEADER include/spdk/idxd_spec.h 00:03:25.026 TEST_HEADER include/spdk/init.h 00:03:25.026 TEST_HEADER include/spdk/ioat.h 00:03:25.026 TEST_HEADER include/spdk/ioat_spec.h 00:03:25.026 TEST_HEADER include/spdk/iscsi_spec.h 00:03:25.026 TEST_HEADER include/spdk/json.h 00:03:25.026 TEST_HEADER include/spdk/jsonrpc.h 00:03:25.026 TEST_HEADER include/spdk/likely.h 00:03:25.026 TEST_HEADER include/spdk/log.h 00:03:25.026 TEST_HEADER include/spdk/lvol.h 00:03:25.026 TEST_HEADER include/spdk/memory.h 00:03:25.026 TEST_HEADER include/spdk/mmio.h 00:03:25.026 TEST_HEADER include/spdk/nbd.h 00:03:25.026 TEST_HEADER include/spdk/notify.h 00:03:25.026 TEST_HEADER include/spdk/nvme.h 00:03:25.026 TEST_HEADER include/spdk/nvme_intel.h 00:03:25.026 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:25.026 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:25.026 TEST_HEADER include/spdk/nvme_spec.h 00:03:25.026 TEST_HEADER include/spdk/nvme_zns.h 00:03:25.026 TEST_HEADER include/spdk/nvmf.h 00:03:25.026 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:25.026 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:25.026 TEST_HEADER include/spdk/nvmf_spec.h 00:03:25.026 TEST_HEADER include/spdk/nvmf_transport.h 00:03:25.026 TEST_HEADER include/spdk/opal.h 00:03:25.026 TEST_HEADER include/spdk/opal_spec.h 00:03:25.026 TEST_HEADER include/spdk/pci_ids.h 00:03:25.026 TEST_HEADER include/spdk/pipe.h 00:03:25.026 TEST_HEADER include/spdk/queue.h 00:03:25.026 TEST_HEADER include/spdk/reduce.h 00:03:25.026 TEST_HEADER include/spdk/rpc.h 00:03:25.026 TEST_HEADER include/spdk/scheduler.h 00:03:25.026 TEST_HEADER include/spdk/scsi.h 00:03:25.026 TEST_HEADER include/spdk/scsi_spec.h 00:03:25.026 TEST_HEADER include/spdk/sock.h 00:03:25.026 TEST_HEADER include/spdk/stdinc.h 00:03:25.026 TEST_HEADER include/spdk/string.h 00:03:25.026 TEST_HEADER include/spdk/thread.h 00:03:25.026 TEST_HEADER include/spdk/trace.h 00:03:25.026 TEST_HEADER include/spdk/trace_parser.h 00:03:25.026 TEST_HEADER include/spdk/tree.h 00:03:25.026 TEST_HEADER include/spdk/ublk.h 00:03:25.026 TEST_HEADER include/spdk/util.h 00:03:25.026 LINK spdk_top 00:03:25.026 TEST_HEADER include/spdk/uuid.h 00:03:25.026 TEST_HEADER include/spdk/version.h 00:03:25.026 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:25.026 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:25.026 TEST_HEADER include/spdk/vhost.h 00:03:25.026 TEST_HEADER include/spdk/vmd.h 00:03:25.026 TEST_HEADER include/spdk/xor.h 00:03:25.026 TEST_HEADER include/spdk/zipf.h 00:03:25.026 CXX test/cpp_headers/accel.o 00:03:25.284 LINK bdevio 00:03:25.284 
LINK mkfs 00:03:25.284 CXX test/cpp_headers/accel_module.o 00:03:25.541 LINK spdk_nvme 00:03:25.541 CXX test/cpp_headers/assert.o 00:03:25.799 CXX test/cpp_headers/barrier.o 00:03:25.799 CC app/fio/bdev/fio_plugin.o 00:03:25.799 CXX test/cpp_headers/base64.o 00:03:25.799 CXX test/cpp_headers/bdev.o 00:03:26.056 CXX test/cpp_headers/bdev_module.o 00:03:26.056 CC test/dma/test_dma/test_dma.o 00:03:26.314 CXX test/cpp_headers/bdev_zone.o 00:03:26.572 LINK spdk_bdev 00:03:26.572 LINK test_dma 00:03:26.572 CXX test/cpp_headers/bit_array.o 00:03:26.572 CXX test/cpp_headers/bit_pool.o 00:03:26.830 CXX test/cpp_headers/blob.o 00:03:26.830 CC test/env/mem_callbacks/mem_callbacks.o 00:03:27.087 CXX test/cpp_headers/blob_bdev.o 00:03:27.087 CC test/env/vtophys/vtophys.o 00:03:27.087 CXX test/cpp_headers/blobfs.o 00:03:27.345 LINK vtophys 00:03:27.345 LINK mem_callbacks 00:03:27.345 CXX test/cpp_headers/blobfs_bdev.o 00:03:27.345 CXX test/cpp_headers/conf.o 00:03:27.603 CXX test/cpp_headers/config.o 00:03:27.603 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:27.603 CXX test/cpp_headers/cpuset.o 00:03:27.862 LINK env_dpdk_post_init 00:03:27.862 CXX test/cpp_headers/crc16.o 00:03:27.862 CC test/event/event_perf/event_perf.o 00:03:27.862 CXX test/cpp_headers/crc32.o 00:03:28.119 CXX test/cpp_headers/crc64.o 00:03:28.378 CXX test/cpp_headers/dif.o 00:03:28.378 CXX test/cpp_headers/dma.o 00:03:28.378 LINK event_perf 00:03:28.378 CC test/env/memory/memory_ut.o 00:03:28.378 CC test/env/pci/pci_ut.o 00:03:28.636 CXX test/cpp_headers/endian.o 00:03:28.636 CC test/lvol/esnap/esnap.o 00:03:28.636 CC test/nvme/aer/aer.o 00:03:28.636 CXX test/cpp_headers/env.o 00:03:28.636 CC test/rpc_client/rpc_client_test.o 00:03:28.948 CXX test/cpp_headers/env_dpdk.o 00:03:28.948 LINK rpc_client_test 00:03:28.948 LINK pci_ut 00:03:28.948 LINK aer 00:03:28.948 CXX test/cpp_headers/event.o 00:03:28.948 CXX test/cpp_headers/fd.o 00:03:29.206 CXX test/cpp_headers/fd_group.o 00:03:29.206 CC test/thread/poller_perf/poller_perf.o 00:03:29.206 CC test/event/reactor/reactor.o 00:03:29.463 LINK memory_ut 00:03:29.463 CXX test/cpp_headers/file.o 00:03:29.463 LINK poller_perf 00:03:29.463 LINK reactor 00:03:29.463 CC test/event/reactor_perf/reactor_perf.o 00:03:29.463 CC test/thread/lock/spdk_lock.o 00:03:29.721 CXX test/cpp_headers/ftl.o 00:03:29.721 LINK reactor_perf 00:03:29.721 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:29.721 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:29.980 CC test/event/app_repeat/app_repeat.o 00:03:29.980 CXX test/cpp_headers/gpt_spec.o 00:03:29.980 LINK histogram_ut 00:03:29.980 CXX test/cpp_headers/hexlify.o 00:03:29.980 LINK app_repeat 00:03:30.238 CC test/nvme/reset/reset.o 00:03:30.238 CC test/nvme/sgl/sgl.o 00:03:30.238 CC test/nvme/e2edp/nvme_dp.o 00:03:30.238 CXX test/cpp_headers/histogram_data.o 00:03:30.238 CC test/nvme/overhead/overhead.o 00:03:30.495 LINK reset 00:03:30.496 CXX test/cpp_headers/idxd.o 00:03:30.496 LINK sgl 00:03:30.496 LINK nvme_dp 00:03:30.496 CC test/nvme/err_injection/err_injection.o 00:03:30.496 CXX test/cpp_headers/idxd_spec.o 00:03:30.753 LINK overhead 00:03:30.753 LINK err_injection 00:03:30.753 CXX test/cpp_headers/init.o 00:03:31.011 CXX test/cpp_headers/ioat.o 00:03:31.011 CXX test/cpp_headers/ioat_spec.o 00:03:31.268 CC test/event/scheduler/scheduler.o 00:03:31.268 CXX test/cpp_headers/iscsi_spec.o 00:03:31.526 CXX test/cpp_headers/json.o 00:03:31.526 CXX test/cpp_headers/jsonrpc.o 00:03:31.526 LINK scheduler 00:03:31.526 LINK spdk_lock 
00:03:31.526 CXX test/cpp_headers/likely.o 00:03:31.526 CXX test/cpp_headers/log.o 00:03:31.526 CC test/nvme/startup/startup.o 00:03:31.783 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:31.783 CXX test/cpp_headers/lvol.o 00:03:31.784 CXX test/cpp_headers/memory.o 00:03:31.784 CC test/nvme/reserve/reserve.o 00:03:31.784 LINK startup 00:03:31.784 CXX test/cpp_headers/mmio.o 00:03:32.041 LINK reserve 00:03:32.041 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:32.041 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:32.299 CXX test/cpp_headers/nbd.o 00:03:32.299 CXX test/cpp_headers/notify.o 00:03:32.299 LINK tree_ut 00:03:32.299 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:32.299 LINK accel_ut 00:03:32.299 CXX test/cpp_headers/nvme.o 00:03:32.557 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:32.557 CXX test/cpp_headers/nvme_intel.o 00:03:32.814 LINK blob_bdev_ut 00:03:32.814 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:32.814 CXX test/cpp_headers/nvme_ocssd.o 00:03:32.814 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:33.072 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:33.072 CC test/nvme/simple_copy/simple_copy.o 00:03:33.072 LINK blobfs_bdev_ut 00:03:33.072 CC test/nvme/connect_stress/connect_stress.o 00:03:33.072 CXX test/cpp_headers/nvme_spec.o 00:03:33.330 CXX test/cpp_headers/nvme_zns.o 00:03:33.330 LINK simple_copy 00:03:33.330 CC test/nvme/boot_partition/boot_partition.o 00:03:33.330 LINK connect_stress 00:03:33.588 CXX test/cpp_headers/nvmf.o 00:03:33.588 LINK boot_partition 00:03:33.588 CXX test/cpp_headers/nvmf_cmd.o 00:03:33.846 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:33.846 LINK blobfs_async_ut 00:03:34.104 LINK blobfs_sync_ut 00:03:34.104 CXX test/cpp_headers/nvmf_spec.o 00:03:34.104 LINK esnap 00:03:34.362 CXX test/cpp_headers/nvmf_transport.o 00:03:34.362 CXX test/cpp_headers/opal.o 00:03:34.362 CXX test/cpp_headers/opal_spec.o 00:03:34.620 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:34.620 CXX test/cpp_headers/pci_ids.o 00:03:34.620 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:34.620 CC test/unit/lib/event/app.c/app_ut.o 00:03:34.620 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:34.620 CC test/nvme/compliance/nvme_compliance.o 00:03:34.620 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:34.620 CXX test/cpp_headers/pipe.o 00:03:34.620 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:34.878 CXX test/cpp_headers/queue.o 00:03:34.878 CXX test/cpp_headers/reduce.o 00:03:34.878 LINK dma_ut 00:03:35.135 LINK nvme_compliance 00:03:35.135 CXX test/cpp_headers/rpc.o 00:03:35.135 LINK ioat_ut 00:03:35.135 CXX test/cpp_headers/scheduler.o 00:03:35.135 LINK init_grp_ut 00:03:35.135 LINK app_ut 00:03:35.135 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:35.135 CXX test/cpp_headers/scsi.o 00:03:35.392 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:35.392 CXX test/cpp_headers/scsi_spec.o 00:03:35.392 CXX test/cpp_headers/sock.o 00:03:35.392 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:35.392 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:35.665 CXX test/cpp_headers/stdinc.o 00:03:35.665 LINK conn_ut 00:03:35.665 CXX test/cpp_headers/string.o 00:03:35.923 LINK param_ut 00:03:35.923 CXX test/cpp_headers/thread.o 00:03:35.923 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:36.181 LINK portal_grp_ut 00:03:36.181 CXX test/cpp_headers/trace.o 00:03:36.181 CC test/nvme/fused_ordering/fused_ordering.o 00:03:36.181 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:36.181 CXX test/cpp_headers/trace_parser.o 
00:03:36.439 LINK fused_ordering 00:03:36.439 LINK reactor_ut 00:03:36.439 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:36.439 CXX test/cpp_headers/tree.o 00:03:36.439 CXX test/cpp_headers/ublk.o 00:03:36.697 CXX test/cpp_headers/util.o 00:03:36.697 CC test/unit/lib/log/log.c/log_ut.o 00:03:36.697 CXX test/cpp_headers/uuid.o 00:03:36.697 LINK jsonrpc_server_ut 00:03:36.697 LINK tgt_node_ut 00:03:36.955 CXX test/cpp_headers/version.o 00:03:36.955 CXX test/cpp_headers/vfio_user_pci.o 00:03:36.955 LINK log_ut 00:03:36.956 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:37.213 CXX test/cpp_headers/vfio_user_spec.o 00:03:37.213 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:37.213 CXX test/cpp_headers/vhost.o 00:03:37.213 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:37.213 LINK bdev_ut 00:03:37.472 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:37.472 CXX test/cpp_headers/vmd.o 00:03:37.472 LINK notify_ut 00:03:37.472 CXX test/cpp_headers/xor.o 00:03:37.472 LINK doorbell_aers 00:03:37.729 LINK iscsi_ut 00:03:37.729 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:37.729 CXX test/cpp_headers/zipf.o 00:03:37.729 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:37.987 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:37.987 LINK scsi_nvme_ut 00:03:37.987 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:37.987 LINK part_ut 00:03:38.245 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:38.503 LINK dev_ut 00:03:38.503 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:38.503 LINK nvme_ut 00:03:38.503 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:38.761 CC test/nvme/fdp/fdp.o 00:03:38.761 LINK lun_ut 00:03:38.761 LINK json_parse_ut 00:03:38.761 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:39.019 LINK gpt_ut 00:03:39.019 LINK lvol_ut 00:03:39.019 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:39.019 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:39.019 LINK fdp 00:03:39.277 LINK scsi_ut 00:03:39.277 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:39.277 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:39.277 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:39.536 LINK json_util_ut 00:03:39.794 LINK sock_ut 00:03:39.794 LINK vbdev_lvol_ut 00:03:39.794 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:40.053 LINK json_write_ut 00:03:40.053 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:40.053 LINK blob_ut 00:03:40.053 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:40.053 CC test/nvme/cuse/cuse.o 00:03:40.053 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:40.311 LINK scsi_bdev_ut 00:03:40.311 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:40.569 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:40.826 LINK iobuf_ut 00:03:40.826 LINK bdev_zone_ut 00:03:40.826 LINK scsi_pr_ut 00:03:41.083 LINK posix_ut 00:03:41.083 LINK cuse 00:03:41.083 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:41.083 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:41.340 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:41.340 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:41.340 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:41.907 LINK tcp_ut 00:03:41.907 LINK bdev_raid_ut 00:03:41.907 LINK vbdev_zone_block_ut 00:03:41.907 LINK nvme_ctrlr_ut 00:03:42.166 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:42.166 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:42.166 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:42.166 CC 
test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:42.166 LINK ctrlr_bdev_ut 00:03:42.423 LINK thread_ut 00:03:42.423 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:42.681 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:42.681 LINK bdev_raid_sb_ut 00:03:42.939 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:42.939 LINK bdev_ut 00:03:43.197 LINK ctrlr_discovery_ut 00:03:43.197 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:43.197 LINK nvmf_ut 00:03:43.455 LINK subsystem_ut 00:03:43.455 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:43.455 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:43.455 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:43.455 LINK concat_ut 00:03:43.455 LINK nvme_ctrlr_cmd_ut 00:03:43.713 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:43.713 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:43.713 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:43.713 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:43.972 LINK ctrlr_ut 00:03:44.230 LINK nvme_ns_ut 00:03:44.230 LINK raid1_ut 00:03:44.488 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:44.488 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:44.746 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:03:44.747 LINK nvme_poll_group_ut 00:03:45.005 LINK nvme_quirks_ut 00:03:45.005 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:45.005 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:45.263 LINK nvme_ns_ocssd_cmd_ut 00:03:45.263 LINK nvme_ns_cmd_ut 00:03:45.263 LINK nvme_pcie_ut 00:03:45.521 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:45.521 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:45.779 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:45.779 LINK nvme_qpair_ut 00:03:46.068 LINK nvme_transport_ut 00:03:46.068 LINK raid5f_ut 00:03:46.068 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:46.326 LINK nvme_io_msg_ut 00:03:46.326 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:46.326 LINK transport_ut 00:03:46.326 LINK rdma_ut 00:03:46.585 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:03:46.585 LINK nvme_fabric_ut 00:03:46.585 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:46.585 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:46.845 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:46.845 LINK base64_ut 00:03:46.845 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:46.845 LINK nvme_opal_ut 00:03:47.103 LINK nvme_pcie_common_ut 00:03:47.103 LINK cpuset_ut 00:03:47.103 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:47.103 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:47.103 LINK bit_array_ut 00:03:47.103 LINK pci_event_ut 00:03:47.103 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:47.362 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:47.362 LINK crc16_ut 00:03:47.362 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:47.362 LINK crc32_ieee_ut 00:03:47.362 LINK crc32c_ut 00:03:47.362 LINK nvme_tcp_ut 00:03:47.362 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:47.620 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:47.620 LINK subsystem_ut 00:03:47.620 LINK crc64_ut 00:03:47.620 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:47.620 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:47.878 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:03:47.878 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:47.878 LINK rpc_ut 00:03:47.878 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:47.878 LINK nvme_cuse_ut 00:03:47.878 
LINK idxd_user_ut 00:03:48.135 LINK iov_ut 00:03:48.135 LINK bdev_nvme_ut 00:03:48.135 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:03:48.135 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:03:48.135 CC test/unit/lib/util/math.c/math_ut.o 00:03:48.393 LINK nvme_rdma_ut 00:03:48.393 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:48.393 LINK math_ut 00:03:48.393 LINK common_ut 00:03:48.393 LINK idxd_ut 00:03:48.393 LINK ftl_l2p_ut 00:03:48.651 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:48.651 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:48.651 CC test/unit/lib/util/string.c/string_ut.o 00:03:48.651 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:48.651 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:48.651 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:03:48.651 LINK ftl_bitmap_ut 00:03:48.651 LINK pipe_ut 00:03:48.909 LINK dif_ut 00:03:48.909 LINK xor_ut 00:03:48.909 LINK string_ut 00:03:48.909 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:48.909 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:48.909 LINK ftl_mempool_ut 00:03:49.167 LINK ftl_io_ut 00:03:49.167 LINK ftl_mngt_ut 00:03:49.425 LINK ftl_band_ut 00:03:49.683 LINK vhost_ut 00:03:50.249 LINK ftl_layout_upgrade_ut 00:03:50.249 LINK ftl_sb_ut 00:03:50.507 00:03:50.507 real 1m52.230s 00:03:50.507 user 9m55.179s 00:03:50.507 sys 1m52.405s 00:03:50.507 16:42:39 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:50.507 16:42:39 -- common/autotest_common.sh@10 -- $ set +x 00:03:50.507 ************************************ 00:03:50.507 END TEST unittest_build 00:03:50.507 ************************************ 00:03:50.507 16:42:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:50.507 16:42:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:50.507 16:42:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:50.766 16:42:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:50.766 16:42:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:50.766 16:42:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:50.766 16:42:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:50.766 16:42:39 -- scripts/common.sh@335 -- # IFS=.-: 00:03:50.766 16:42:39 -- scripts/common.sh@335 -- # read -ra ver1 00:03:50.766 16:42:39 -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.766 16:42:39 -- scripts/common.sh@336 -- # read -ra ver2 00:03:50.766 16:42:39 -- scripts/common.sh@337 -- # local 'op=<' 00:03:50.766 16:42:39 -- scripts/common.sh@339 -- # ver1_l=2 00:03:50.766 16:42:39 -- scripts/common.sh@340 -- # ver2_l=1 00:03:50.766 16:42:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:50.766 16:42:39 -- scripts/common.sh@343 -- # case "$op" in 00:03:50.766 16:42:39 -- scripts/common.sh@344 -- # : 1 00:03:50.766 16:42:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:50.766 16:42:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:50.766 16:42:39 -- scripts/common.sh@364 -- # decimal 1 00:03:50.766 16:42:39 -- scripts/common.sh@352 -- # local d=1 00:03:50.766 16:42:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.766 16:42:39 -- scripts/common.sh@354 -- # echo 1 00:03:50.766 16:42:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:50.766 16:42:39 -- scripts/common.sh@365 -- # decimal 2 00:03:50.766 16:42:39 -- scripts/common.sh@352 -- # local d=2 00:03:50.766 16:42:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.766 16:42:39 -- scripts/common.sh@354 -- # echo 2 00:03:50.766 16:42:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:50.766 16:42:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:50.766 16:42:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:50.766 16:42:39 -- scripts/common.sh@367 -- # return 0 00:03:50.766 16:42:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.766 16:42:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:50.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.766 --rc genhtml_branch_coverage=1 00:03:50.766 --rc genhtml_function_coverage=1 00:03:50.766 --rc genhtml_legend=1 00:03:50.766 --rc geninfo_all_blocks=1 00:03:50.766 --rc geninfo_unexecuted_blocks=1 00:03:50.766 00:03:50.766 ' 00:03:50.766 16:42:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:50.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.766 --rc genhtml_branch_coverage=1 00:03:50.766 --rc genhtml_function_coverage=1 00:03:50.766 --rc genhtml_legend=1 00:03:50.766 --rc geninfo_all_blocks=1 00:03:50.766 --rc geninfo_unexecuted_blocks=1 00:03:50.766 00:03:50.766 ' 00:03:50.766 16:42:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:50.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.766 --rc genhtml_branch_coverage=1 00:03:50.766 --rc genhtml_function_coverage=1 00:03:50.766 --rc genhtml_legend=1 00:03:50.766 --rc geninfo_all_blocks=1 00:03:50.766 --rc geninfo_unexecuted_blocks=1 00:03:50.766 00:03:50.766 ' 00:03:50.766 16:42:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:50.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.766 --rc genhtml_branch_coverage=1 00:03:50.766 --rc genhtml_function_coverage=1 00:03:50.766 --rc genhtml_legend=1 00:03:50.766 --rc geninfo_all_blocks=1 00:03:50.766 --rc geninfo_unexecuted_blocks=1 00:03:50.766 00:03:50.766 ' 00:03:50.766 16:42:39 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:50.766 16:42:39 -- nvmf/common.sh@7 -- # uname -s 00:03:50.766 16:42:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:50.766 16:42:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:50.766 16:42:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:50.766 16:42:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:50.766 16:42:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:50.766 16:42:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:50.766 16:42:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:50.766 16:42:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:50.766 16:42:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:50.766 16:42:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:50.766 16:42:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d359ba08-a58f-4013-814b-f7d8f4343761 00:03:50.766 
16:42:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=d359ba08-a58f-4013-814b-f7d8f4343761 00:03:50.766 16:42:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:50.766 16:42:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:50.766 16:42:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:50.766 16:42:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:50.766 16:42:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:50.766 16:42:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:50.766 16:42:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:50.766 16:42:39 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:50.766 16:42:39 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:50.766 16:42:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:50.766 16:42:39 -- paths/export.sh@5 -- # export PATH 00:03:50.766 16:42:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:50.766 16:42:39 -- nvmf/common.sh@46 -- # : 0 00:03:50.766 16:42:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:50.766 16:42:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:50.766 16:42:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:50.766 16:42:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:50.766 16:42:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:50.766 16:42:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:50.766 16:42:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:50.766 16:42:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:50.766 16:42:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:50.766 16:42:39 -- spdk/autotest.sh@32 -- # uname -s 00:03:50.766 16:42:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:50.766 16:42:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:03:50.766 16:42:39 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:50.766 16:42:39 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:50.766 16:42:39 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:50.766 16:42:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:50.766 16:42:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:50.766 16:42:39 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:03:50.766 16:42:39 -- spdk/autotest.sh@48 -- # udevadm_pid=92489 00:03:50.766 16:42:39 -- 
spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:03:50.766 16:42:39 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:50.766 16:42:39 -- spdk/autotest.sh@54 -- # echo 92495 00:03:50.766 16:42:39 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:50.766 16:42:39 -- spdk/autotest.sh@56 -- # echo 92496 00:03:50.766 16:42:39 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:50.766 16:42:39 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:50.766 16:42:39 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:50.766 16:42:39 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:50.766 16:42:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:50.766 16:42:39 -- common/autotest_common.sh@10 -- # set +x 00:03:50.766 16:42:39 -- spdk/autotest.sh@70 -- # create_test_list 00:03:50.766 16:42:39 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:50.766 16:42:39 -- common/autotest_common.sh@10 -- # set +x 00:03:50.766 16:42:39 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:50.766 16:42:39 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:50.766 16:42:39 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:50.766 16:42:39 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:50.766 16:42:39 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:50.766 16:42:39 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:50.766 16:42:39 -- common/autotest_common.sh@1450 -- # uname 00:03:50.767 16:42:39 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:50.767 16:42:39 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:50.767 16:42:39 -- common/autotest_common.sh@1470 -- # uname 00:03:50.767 16:42:39 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:50.767 16:42:39 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:50.767 16:42:39 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:51.024 lcov: LCOV version 1.15 00:03:51.024 16:42:39 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:09.101 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:09.101 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:09.101 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:09.101 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:09.101 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:09.101 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 
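The baseline capture above (lcov -c -i -t Baseline) records zero execution counts for every instrumented file before any test runs; the geninfo warnings are expected for .gcno files whose translation units contain no functions. The post-test capture and merge steps sketched below are the typical lcov workflow implied by a baseline, not commands visible in this part of the log:

# 1. Baseline: -i (initial) records 0 hits for all instrumented code.
lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
     -q -c --no-external -i -t Baseline \
     -d /home/vagrant/spdk_repo/spdk -o cov_base.info
# 2. ...run the test suites...
# 3. Capture post-test counts, then merge with the baseline so files that
#    were never executed still show up (at 0%) in the final report.
lcov -q -c --no-external -t Tests \
     -d /home/vagrant/spdk_repo/spdk -o cov_test.info
lcov -a cov_base.info -a cov_test.info -o cov_total.info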
00:04:41.173 16:43:26 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:41.173 16:43:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:41.173 16:43:26 -- common/autotest_common.sh@10 -- # set +x 00:04:41.173 16:43:26 -- spdk/autotest.sh@89 -- # rm -f 00:04:41.173 16:43:26 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:41.173 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:41.173 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:41.173 16:43:26 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:41.173 16:43:26 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:41.173 16:43:26 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:41.173 16:43:26 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:41.173 16:43:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:41.173 16:43:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:41.173 16:43:26 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:41.173 16:43:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:41.173 16:43:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:41.173 16:43:26 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:41.173 16:43:26 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 00:04:41.173 16:43:26 -- spdk/autotest.sh@108 -- # grep -v p 00:04:41.173 16:43:26 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:41.173 16:43:26 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:41.173 16:43:26 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:41.173 16:43:26 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:41.173 16:43:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:41.173 No valid GPT data, bailing 00:04:41.173 16:43:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:41.173 16:43:26 -- scripts/common.sh@393 -- # pt= 00:04:41.173 16:43:26 -- scripts/common.sh@394 -- # return 1 00:04:41.173 16:43:26 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:41.173 1+0 records in 00:04:41.173 1+0 records out 00:04:41.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00525009 s, 200 MB/s 00:04:41.173 16:43:26 -- spdk/autotest.sh@116 -- # sync 00:04:41.173 16:43:26 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:41.173 16:43:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:41.173 16:43:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:41.173 16:43:27 -- spdk/autotest.sh@122 -- # uname -s 00:04:41.173 16:43:27 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:04:41.173 16:43:27 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:41.173 16:43:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.173 16:43:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.173 16:43:27 -- common/autotest_common.sh@10 -- # set +x 00:04:41.173 ************************************ 00:04:41.173 START TEST setup.sh 00:04:41.173 ************************************ 00:04:41.173 16:43:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:41.173 * Looking for test storage... 
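Above, each whole nvme namespace is checked for a partition table (spdk-gpt.py reports "No valid GPT data, bailing", and blkid finds no PTTYPE, so block_in_use returns 1) and the first MiB is then zeroed to clear stale metadata. That loop reduces to roughly the following; device_in_use is an illustrative stand-in for the block_in_use helper being traced:

# Illustrative stand-in for scripts/common.sh block_in_use(): true when
# the device carries a valid partition table.
device_in_use() {
    [[ -n $(blkid -s PTTYPE -o value "$1") ]]
}
# Iterate whole namespaces, skipping partitions (the 'grep -v p' above).
for dev in $(ls /dev/nvme*n* | grep -v p || true); do
    if ! device_in_use "$dev"; then
        # "No valid GPT data, bailing" -> disk is free: zero the first
        # MiB to wipe old partition tables and filesystem signatures.
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done
sync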
00:04:41.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:41.173 16:43:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:41.173 16:43:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:41.173 16:43:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:41.173 16:43:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:41.173 16:43:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:41.173 16:43:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:41.173 16:43:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:41.173 16:43:27 -- scripts/common.sh@335 -- # IFS=.-: 00:04:41.173 16:43:27 -- scripts/common.sh@335 -- # read -ra ver1 00:04:41.173 16:43:27 -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.173 16:43:27 -- scripts/common.sh@336 -- # read -ra ver2 00:04:41.173 16:43:27 -- scripts/common.sh@337 -- # local 'op=<' 00:04:41.173 16:43:27 -- scripts/common.sh@339 -- # ver1_l=2 00:04:41.173 16:43:27 -- scripts/common.sh@340 -- # ver2_l=1 00:04:41.173 16:43:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:41.173 16:43:27 -- scripts/common.sh@343 -- # case "$op" in 00:04:41.173 16:43:27 -- scripts/common.sh@344 -- # : 1 00:04:41.173 16:43:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:41.173 16:43:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.173 16:43:27 -- scripts/common.sh@364 -- # decimal 1 00:04:41.173 16:43:27 -- scripts/common.sh@352 -- # local d=1 00:04:41.173 16:43:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.173 16:43:27 -- scripts/common.sh@354 -- # echo 1 00:04:41.173 16:43:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:41.173 16:43:28 -- scripts/common.sh@365 -- # decimal 2 00:04:41.173 16:43:28 -- scripts/common.sh@352 -- # local d=2 00:04:41.173 16:43:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.173 16:43:28 -- scripts/common.sh@354 -- # echo 2 00:04:41.173 16:43:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:41.173 16:43:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:41.173 16:43:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:41.173 16:43:28 -- scripts/common.sh@367 -- # return 0 00:04:41.173 16:43:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.173 16:43:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:41.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.173 --rc genhtml_branch_coverage=1 00:04:41.173 --rc genhtml_function_coverage=1 00:04:41.173 --rc genhtml_legend=1 00:04:41.173 --rc geninfo_all_blocks=1 00:04:41.173 --rc geninfo_unexecuted_blocks=1 00:04:41.173 00:04:41.173 ' 00:04:41.173 16:43:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:41.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.173 --rc genhtml_branch_coverage=1 00:04:41.173 --rc genhtml_function_coverage=1 00:04:41.173 --rc genhtml_legend=1 00:04:41.173 --rc geninfo_all_blocks=1 00:04:41.173 --rc geninfo_unexecuted_blocks=1 00:04:41.173 00:04:41.173 ' 00:04:41.173 16:43:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:41.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.173 --rc genhtml_branch_coverage=1 00:04:41.173 --rc genhtml_function_coverage=1 00:04:41.173 --rc genhtml_legend=1 00:04:41.173 --rc geninfo_all_blocks=1 00:04:41.173 --rc geninfo_unexecuted_blocks=1 00:04:41.173 00:04:41.173 ' 00:04:41.173 16:43:28 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:41.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.173 --rc genhtml_branch_coverage=1 00:04:41.173 --rc genhtml_function_coverage=1 00:04:41.173 --rc genhtml_legend=1 00:04:41.173 --rc geninfo_all_blocks=1 00:04:41.173 --rc geninfo_unexecuted_blocks=1 00:04:41.173 00:04:41.173 ' 00:04:41.173 16:43:28 -- setup/test-setup.sh@10 -- # uname -s 00:04:41.173 16:43:28 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:41.173 16:43:28 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:41.173 16:43:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.173 16:43:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.173 16:43:28 -- common/autotest_common.sh@10 -- # set +x 00:04:41.173 ************************************ 00:04:41.173 START TEST acl 00:04:41.173 ************************************ 00:04:41.173 16:43:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:41.173 * Looking for test storage... 00:04:41.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:41.173 16:43:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:41.173 16:43:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:41.173 16:43:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:41.173 16:43:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:41.173 16:43:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:41.173 16:43:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:41.173 16:43:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:41.173 16:43:28 -- scripts/common.sh@335 -- # IFS=.-: 00:04:41.173 16:43:28 -- scripts/common.sh@335 -- # read -ra ver1 00:04:41.173 16:43:28 -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.173 16:43:28 -- scripts/common.sh@336 -- # read -ra ver2 00:04:41.173 16:43:28 -- scripts/common.sh@337 -- # local 'op=<' 00:04:41.173 16:43:28 -- scripts/common.sh@339 -- # ver1_l=2 00:04:41.173 16:43:28 -- scripts/common.sh@340 -- # ver2_l=1 00:04:41.173 16:43:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:41.173 16:43:28 -- scripts/common.sh@343 -- # case "$op" in 00:04:41.173 16:43:28 -- scripts/common.sh@344 -- # : 1 00:04:41.173 16:43:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:41.173 16:43:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.173 16:43:28 -- scripts/common.sh@364 -- # decimal 1 00:04:41.173 16:43:28 -- scripts/common.sh@352 -- # local d=1 00:04:41.173 16:43:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.173 16:43:28 -- scripts/common.sh@354 -- # echo 1 00:04:41.174 16:43:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:41.174 16:43:28 -- scripts/common.sh@365 -- # decimal 2 00:04:41.174 16:43:28 -- scripts/common.sh@352 -- # local d=2 00:04:41.174 16:43:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.174 16:43:28 -- scripts/common.sh@354 -- # echo 2 00:04:41.174 16:43:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:41.174 16:43:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:41.174 16:43:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:41.174 16:43:28 -- scripts/common.sh@367 -- # return 0 00:04:41.174 16:43:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.174 16:43:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:41.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.174 --rc genhtml_branch_coverage=1 00:04:41.174 --rc genhtml_function_coverage=1 00:04:41.174 --rc genhtml_legend=1 00:04:41.174 --rc geninfo_all_blocks=1 00:04:41.174 --rc geninfo_unexecuted_blocks=1 00:04:41.174 00:04:41.174 ' 00:04:41.174 16:43:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:41.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.174 --rc genhtml_branch_coverage=1 00:04:41.174 --rc genhtml_function_coverage=1 00:04:41.174 --rc genhtml_legend=1 00:04:41.174 --rc geninfo_all_blocks=1 00:04:41.174 --rc geninfo_unexecuted_blocks=1 00:04:41.174 00:04:41.174 ' 00:04:41.174 16:43:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:41.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.174 --rc genhtml_branch_coverage=1 00:04:41.174 --rc genhtml_function_coverage=1 00:04:41.174 --rc genhtml_legend=1 00:04:41.174 --rc geninfo_all_blocks=1 00:04:41.174 --rc geninfo_unexecuted_blocks=1 00:04:41.174 00:04:41.174 ' 00:04:41.174 16:43:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:41.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.174 --rc genhtml_branch_coverage=1 00:04:41.174 --rc genhtml_function_coverage=1 00:04:41.174 --rc genhtml_legend=1 00:04:41.174 --rc geninfo_all_blocks=1 00:04:41.174 --rc geninfo_unexecuted_blocks=1 00:04:41.174 00:04:41.174 ' 00:04:41.174 16:43:28 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:41.174 16:43:28 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:41.174 16:43:28 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:41.174 16:43:28 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:41.174 16:43:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:41.174 16:43:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:41.174 16:43:28 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:41.174 16:43:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:41.174 16:43:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:41.174 16:43:28 -- setup/acl.sh@12 -- # devs=() 00:04:41.174 16:43:28 -- setup/acl.sh@12 -- # declare -a devs 00:04:41.174 16:43:28 -- setup/acl.sh@13 -- # drivers=() 00:04:41.174 16:43:28 -- setup/acl.sh@13 -- # declare -A drivers 00:04:41.174 16:43:28 -- setup/acl.sh@51 -- # 
setup reset 00:04:41.174 16:43:28 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.174 16:43:28 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:41.174 16:43:28 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:41.174 16:43:28 -- setup/acl.sh@16 -- # local dev driver 00:04:41.174 16:43:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.174 16:43:28 -- setup/acl.sh@15 -- # setup output status 00:04:41.174 16:43:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.174 16:43:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:41.174 Hugepages 00:04:41.174 node hugesize free / total 00:04:41.174 16:43:28 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:41.174 16:43:28 -- setup/acl.sh@19 -- # continue 00:04:41.174 16:43:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.174 00:04:41.174 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:41.174 16:43:28 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:41.174 16:43:28 -- setup/acl.sh@19 -- # continue 00:04:41.174 16:43:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.174 16:43:28 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:41.174 16:43:28 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:41.174 16:43:28 -- setup/acl.sh@20 -- # continue 00:04:41.174 16:43:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.174 16:43:28 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:41.174 16:43:28 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:41.174 16:43:28 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:41.174 16:43:28 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:41.174 16:43:28 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:41.174 16:43:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.174 16:43:28 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:41.174 16:43:28 -- setup/acl.sh@54 -- # run_test denied denied 00:04:41.174 16:43:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.174 16:43:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.174 16:43:28 -- common/autotest_common.sh@10 -- # set +x 00:04:41.174 ************************************ 00:04:41.174 START TEST denied 00:04:41.174 ************************************ 00:04:41.174 16:43:28 -- common/autotest_common.sh@1114 -- # denied 00:04:41.174 16:43:28 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:41.174 16:43:28 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:41.174 16:43:28 -- setup/acl.sh@38 -- # setup output config 00:04:41.174 16:43:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.174 16:43:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:41.740 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:41.740 16:43:30 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:41.740 16:43:30 -- setup/acl.sh@28 -- # local dev driver 00:04:41.740 16:43:30 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:41.740 16:43:30 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:41.740 16:43:30 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:41.740 16:43:30 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:41.740 16:43:30 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:41.740 16:43:30 -- setup/acl.sh@41 -- # setup reset 00:04:41.740 16:43:30 -- setup/common.sh@9 -- # [[ 
reset == output ]] 00:04:41.740 16:43:30 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:41.999 00:04:41.999 real 0m1.866s 00:04:41.999 user 0m0.469s 00:04:41.999 sys 0m1.447s 00:04:41.999 16:43:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.999 16:43:30 -- common/autotest_common.sh@10 -- # set +x 00:04:41.999 ************************************ 00:04:41.999 END TEST denied 00:04:41.999 ************************************ 00:04:41.999 16:43:30 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:41.999 16:43:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.999 16:43:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.999 16:43:30 -- common/autotest_common.sh@10 -- # set +x 00:04:42.257 ************************************ 00:04:42.257 START TEST allowed 00:04:42.257 ************************************ 00:04:42.257 16:43:30 -- common/autotest_common.sh@1114 -- # allowed 00:04:42.257 16:43:30 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:42.257 16:43:30 -- setup/acl.sh@45 -- # setup output config 00:04:42.257 16:43:30 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:42.257 16:43:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.257 16:43:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:43.632 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.632 16:43:32 -- setup/acl.sh@47 -- # verify 00:04:43.632 16:43:32 -- setup/acl.sh@28 -- # local dev driver 00:04:43.632 16:43:32 -- setup/acl.sh@48 -- # setup reset 00:04:43.632 16:43:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.632 16:43:32 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:44.201 00:04:44.201 real 0m1.910s 00:04:44.201 user 0m0.446s 00:04:44.201 sys 0m1.468s 00:04:44.201 16:43:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:44.201 ************************************ 00:04:44.201 END TEST allowed 00:04:44.201 ************************************ 00:04:44.201 16:43:32 -- common/autotest_common.sh@10 -- # set +x 00:04:44.201 00:04:44.201 real 0m4.807s 00:04:44.201 user 0m1.511s 00:04:44.201 sys 0m3.410s 00:04:44.201 16:43:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:44.201 16:43:32 -- common/autotest_common.sh@10 -- # set +x 00:04:44.201 ************************************ 00:04:44.201 END TEST acl 00:04:44.201 ************************************ 00:04:44.201 16:43:32 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:44.201 16:43:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.201 16:43:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.201 16:43:32 -- common/autotest_common.sh@10 -- # set +x 00:04:44.201 ************************************ 00:04:44.201 START TEST hugepages 00:04:44.201 ************************************ 00:04:44.201 16:43:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:44.201 * Looking for test storage... 
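The denied/allowed tests above drive setup.sh with PCI_BLOCKED / PCI_ALLOWED and then verify which kernel driver each BDF ended up bound to: the controller stays on nvme while blocked, and moves to uio_pci_generic once allowed. The binding check is just a read of the driver symlink, roughly; bound_driver is an illustrative name, while the paths and driver names are the ones visible in the trace:

# Report the driver currently bound to a PCI function, e.g. 0000:00:06.0.
bound_driver() {
    local bdf=$1 link
    link=$(readlink -f "/sys/bus/pci/devices/$bdf/driver") || return 1
    basename "$link"
}
# Blocked from SPDK: setup.sh skips the controller, kernel nvme keeps it.
PCI_BLOCKED=' 0000:00:06.0' /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
[[ $(bound_driver 0000:00:06.0) == nvme ]]
# Allowed: setup.sh rebinds it to a userspace-capable driver.
PCI_ALLOWED='0000:00:06.0' /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
[[ $(bound_driver 0000:00:06.0) == uio_pci_generic ]]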
00:04:44.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:44.201 16:43:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:44.201 16:43:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:44.201 16:43:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:44.201 16:43:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:44.201 16:43:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:44.201 16:43:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:44.201 16:43:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:44.201 16:43:33 -- scripts/common.sh@335 -- # IFS=.-: 00:04:44.201 16:43:33 -- scripts/common.sh@335 -- # read -ra ver1 00:04:44.201 16:43:33 -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.201 16:43:33 -- scripts/common.sh@336 -- # read -ra ver2 00:04:44.201 16:43:33 -- scripts/common.sh@337 -- # local 'op=<' 00:04:44.201 16:43:33 -- scripts/common.sh@339 -- # ver1_l=2 00:04:44.201 16:43:33 -- scripts/common.sh@340 -- # ver2_l=1 00:04:44.201 16:43:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:44.201 16:43:33 -- scripts/common.sh@343 -- # case "$op" in 00:04:44.201 16:43:33 -- scripts/common.sh@344 -- # : 1 00:04:44.201 16:43:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:44.201 16:43:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:44.201 16:43:33 -- scripts/common.sh@364 -- # decimal 1 00:04:44.201 16:43:33 -- scripts/common.sh@352 -- # local d=1 00:04:44.201 16:43:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.201 16:43:33 -- scripts/common.sh@354 -- # echo 1 00:04:44.201 16:43:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:44.201 16:43:33 -- scripts/common.sh@365 -- # decimal 2 00:04:44.201 16:43:33 -- scripts/common.sh@352 -- # local d=2 00:04:44.201 16:43:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.201 16:43:33 -- scripts/common.sh@354 -- # echo 2 00:04:44.201 16:43:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:44.201 16:43:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:44.201 16:43:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:44.201 16:43:33 -- scripts/common.sh@367 -- # return 0 00:04:44.201 16:43:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.201 16:43:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:44.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.201 --rc genhtml_branch_coverage=1 00:04:44.201 --rc genhtml_function_coverage=1 00:04:44.201 --rc genhtml_legend=1 00:04:44.201 --rc geninfo_all_blocks=1 00:04:44.201 --rc geninfo_unexecuted_blocks=1 00:04:44.201 00:04:44.201 ' 00:04:44.201 16:43:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:44.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.201 --rc genhtml_branch_coverage=1 00:04:44.201 --rc genhtml_function_coverage=1 00:04:44.201 --rc genhtml_legend=1 00:04:44.201 --rc geninfo_all_blocks=1 00:04:44.201 --rc geninfo_unexecuted_blocks=1 00:04:44.201 00:04:44.201 ' 00:04:44.201 16:43:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:44.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.201 --rc genhtml_branch_coverage=1 00:04:44.201 --rc genhtml_function_coverage=1 00:04:44.201 --rc genhtml_legend=1 00:04:44.201 --rc geninfo_all_blocks=1 00:04:44.201 --rc geninfo_unexecuted_blocks=1 00:04:44.201 00:04:44.201 ' 00:04:44.201 16:43:33 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:44.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.201 --rc genhtml_branch_coverage=1 00:04:44.201 --rc genhtml_function_coverage=1 00:04:44.201 --rc genhtml_legend=1 00:04:44.201 --rc geninfo_all_blocks=1 00:04:44.201 --rc geninfo_unexecuted_blocks=1 00:04:44.201 00:04:44.201 ' 00:04:44.201 16:43:33 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:44.201 16:43:33 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:44.201 16:43:33 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:44.201 16:43:33 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:44.201 16:43:33 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:44.201 16:43:33 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:44.201 16:43:33 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:44.201 16:43:33 -- setup/common.sh@18 -- # local node= 00:04:44.201 16:43:33 -- setup/common.sh@19 -- # local var val 00:04:44.201 16:43:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.201 16:43:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.201 16:43:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.201 16:43:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.201 16:43:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.201 16:43:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 2961740 kB' 'MemAvailable: 7394776 kB' 'Buffers: 35688 kB' 'Cached: 4536192 kB' 'SwapCached: 0 kB' 'Active: 999152 kB' 'Inactive: 3704212 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 142072 kB' 'Active(file): 998076 kB' 'Inactive(file): 3562140 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 160796 kB' 'Mapped: 68548 kB' 'Shmem: 2600 kB' 'KReclaimable: 194124 kB' 'Slab: 257732 kB' 'SReclaimable: 194124 kB' 'SUnreclaim: 63608 kB' 'KernelStack: 4508 kB' 'PageTables: 3628 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024336 kB' 'Committed_AS: 507216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 
-- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # 
continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.201 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.201 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # continue 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.202 16:43:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.202 16:43:33 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.202 16:43:33 -- setup/common.sh@33 -- # echo 2048 00:04:44.202 16:43:33 -- setup/common.sh@33 -- # return 0 00:04:44.202 16:43:33 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:44.202 16:43:33 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:44.202 16:43:33 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:44.202 16:43:33 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:44.202 16:43:33 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:44.202 16:43:33 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:44.202 16:43:33 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:44.202 16:43:33 -- setup/hugepages.sh@207 -- # get_nodes 00:04:44.202 16:43:33 -- setup/hugepages.sh@27 -- # local node 00:04:44.202 16:43:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.461 16:43:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:44.461 16:43:33 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:44.461 16:43:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:44.461 16:43:33 -- setup/hugepages.sh@208 -- # clear_hp 00:04:44.461 16:43:33 -- setup/hugepages.sh@37 -- # local node hp 00:04:44.461 16:43:33 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:44.461 16:43:33 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:44.461 16:43:33 -- setup/hugepages.sh@41 -- # echo 0 00:04:44.461 16:43:33 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:44.461 16:43:33 -- setup/hugepages.sh@41 -- # echo 0 00:04:44.461 16:43:33 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:44.461 16:43:33 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:44.461 16:43:33 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:44.461 16:43:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.461 16:43:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.461 16:43:33 -- 
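Most of the preceding trace is setup/common.sh get_meminfo() scanning /proc/meminfo field by field until it reaches the requested key: Hugepagesize resolves to 2048 kB, so default_hugepages=2048 and the 2 GiB request from get_test_nr_hugepages 2097152 maps to 1024 pages. The loop being traced is essentially the following sketch:

# Sketch of get_meminfo(): return a single value from /proc/meminfo.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # the long [[ X == \H\u... ]] churn
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}
default_hugepages=$(get_meminfo Hugepagesize)    # 2048 (kB)
nr_hugepages=$(( 2097152 / default_hugepages ))  # 2 GiB -> 1024 pages
echo "$nr_hugepages"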
common/autotest_common.sh@10 -- # set +x 00:04:44.461 ************************************ 00:04:44.461 START TEST default_setup 00:04:44.461 ************************************ 00:04:44.461 16:43:33 -- common/autotest_common.sh@1114 -- # default_setup 00:04:44.461 16:43:33 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:44.461 16:43:33 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:44.461 16:43:33 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:44.461 16:43:33 -- setup/hugepages.sh@51 -- # shift 00:04:44.461 16:43:33 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:44.461 16:43:33 -- setup/hugepages.sh@52 -- # local node_ids 00:04:44.461 16:43:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:44.461 16:43:33 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:44.461 16:43:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:44.461 16:43:33 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:44.461 16:43:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:44.461 16:43:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:44.461 16:43:33 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:44.461 16:43:33 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:44.461 16:43:33 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:44.461 16:43:33 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:44.461 16:43:33 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:44.461 16:43:33 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:44.461 16:43:33 -- setup/hugepages.sh@73 -- # return 0 00:04:44.461 16:43:33 -- setup/hugepages.sh@137 -- # setup output 00:04:44.461 16:43:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.461 16:43:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:44.719 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:44.719 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.293 16:43:34 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:45.293 16:43:34 -- setup/hugepages.sh@89 -- # local node 00:04:45.293 16:43:34 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:45.293 16:43:34 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:45.293 16:43:34 -- setup/hugepages.sh@92 -- # local surp 00:04:45.293 16:43:34 -- setup/hugepages.sh@93 -- # local resv 00:04:45.293 16:43:34 -- setup/hugepages.sh@94 -- # local anon 00:04:45.293 16:43:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:45.293 16:43:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:45.293 16:43:34 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:45.293 16:43:34 -- setup/common.sh@18 -- # local node= 00:04:45.293 16:43:34 -- setup/common.sh@19 -- # local var val 00:04:45.293 16:43:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.293 16:43:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.293 16:43:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.293 16:43:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.293 16:43:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.293 16:43:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5052304 kB' 'MemAvailable: 9485252 
kB' 'Buffers: 35688 kB' 'Cached: 4536276 kB' 'SwapCached: 0 kB' 'Active: 999188 kB' 'Inactive: 3705952 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 143764 kB' 'Active(file): 998108 kB' 'Inactive(file): 3562188 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 162664 kB' 'Mapped: 67992 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257996 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 64040 kB' 'KernelStack: 4416 kB' 'PageTables: 3660 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 509320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.293 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.293 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.294 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.294 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.294 16:43:34 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.294 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.294 16:43:34 -- setup/common.sh@31 
00:04:45.294 16:43:34 -- setup/common.sh@31-32 -- # [xtrace scan collapsed: Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted each tested against AnonHugePages; no match, continue]
00:04:45.294 16:43:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:45.294 16:43:34 -- setup/common.sh@33 -- # echo 0
00:04:45.294 16:43:34 -- setup/common.sh@33 -- # return 0
00:04:45.294 16:43:34 -- setup/hugepages.sh@97 -- # anon=0
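The scans collapsed above all come from one helper, get_meminfo in setup/common.sh: it splits each "Key: value" meminfo line on ': ', skips keys that do not match the requested name, and echoes the matching value (0 if the key never appears). A minimal bash paraphrase, reconstructed from the xtrace rather than from the actual source:

    # Sketch of the traced loop; names mirror the xtrace, the body is a paraphrase.
    get_meminfo_sketch() {
        local get=$1 mem_f=/proc/meminfo
        local var val _
        local IFS=': '
        while read -r var val _; do
            [[ $var == "$get" ]] || continue   # each mismatch is a 'continue' in the trace
            echo "$val"                        # matching key: print its value
            return 0
        done < "$mem_f"
        echo 0                                 # key absent
    }

Called as get_meminfo_sketch AnonHugePages it prints 0 on this machine, which is exactly the anon=0 recorded above.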
00:04:45.294 16:43:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:45.294 16:43:34 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:45.294 16:43:34 -- setup/common.sh@18 -- # local node=
00:04:45.294 16:43:34 -- setup/common.sh@19 -- # local var val
00:04:45.294 16:43:34 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.294 16:43:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.294 16:43:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.294 16:43:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.294 16:43:34 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.294 16:43:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.294 16:43:34 -- setup/common.sh@31 -- # IFS=': '
00:04:45.294 16:43:34 -- setup/common.sh@31 -- # read -r var val _
00:04:45.294 16:43:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5052052 kB' 'MemAvailable: 9485004 kB' 'Buffers: 35688 kB' 'Cached: 4536276 kB' 'SwapCached: 0 kB' 'Active: 999188 kB' 'Inactive: 3705896 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 143704 kB' 'Active(file): 998108 kB' 'Inactive(file): 3562192 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 162360 kB' 'Mapped: 67936 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257996 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 64040 kB' 'KernelStack: 4352 kB' 'PageTables: 3532 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 509320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:45.294 16:43:34 -- setup/common.sh@31-32 -- # [xtrace scan collapsed: every key from MemTotal through HugePages_Rsvd tested against HugePages_Surp; no match, continue]
00:04:45.556 16:43:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.556 16:43:34 -- setup/common.sh@33 -- # echo 0
00:04:45.556 16:43:34 -- setup/common.sh@33 -- # return 0
00:04:45.556 16:43:34 -- setup/hugepages.sh@99 -- # surp=0
00:04:45.556 16:43:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:45.556 16:43:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:45.556 16:43:34 -- setup/common.sh@18 -- # local node=
00:04:45.556 16:43:34 -- setup/common.sh@19 -- # local var val
00:04:45.556 16:43:34 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.556 16:43:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.556 16:43:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.556 16:43:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.556 16:43:34 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.556 16:43:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.556 16:43:34 -- setup/common.sh@31 -- # IFS=': '
00:04:45.556 16:43:34 -- setup/common.sh@31 -- # read -r var val _
00:04:45.556 16:43:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5051800 kB' 'MemAvailable: 9484756 kB' 'Buffers: 35688 kB' 'Cached: 4536276 kB' 'SwapCached: 0 kB' 'Active: 999180 kB' 'Inactive: 3706004 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 143808 kB' 'Active(file): 998108 kB' 'Inactive(file): 3562196 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 162496 kB' 'Mapped: 67936 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257948 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63992 kB' 'KernelStack: 4368 kB' 'PageTables: 3556 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 509320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:45.556 16:43:34 -- setup/common.sh@31-32 -- # [xtrace scan collapsed: every key from MemTotal through HugePages_Free tested against HugePages_Rsvd; no match, continue]
00:04:45.557 16:43:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:45.557 16:43:34 -- setup/common.sh@33 -- # echo 0
00:04:45.557 16:43:34 -- setup/common.sh@33 -- # return 0
00:04:45.557 16:43:34 -- setup/hugepages.sh@100 -- # resv=0
00:04:45.557 nr_hugepages=1024
00:04:45.557 16:43:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:45.557 resv_hugepages=0
00:04:45.557 16:43:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:45.557 surplus_hugepages=0
00:04:45.557 16:43:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:45.557 anon_hugepages=0
00:04:45.557 16:43:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:45.557 16:43:34 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:45.557 16:43:34 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
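The four echoes above are the bookkeeping the whole check rests on: the kernel's HugePages_Total has to equal the pages the test asked for plus any surplus and reserved pages. A worked restatement with the values just read from the log (no extra instrumentation, only the numbers already shown):

    nr_hugepages=1024 surp=0 resv=0            # values echoed above
    total=1024                                 # get_meminfo HugePages_Total, fetched next
    (( total == nr_hugepages + surp + resv ))  # 1024 == 1024 + 0 + 0, so the check passes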
mem=("${mem[@]#Node +([0-9]) }") 00:04:45.557 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:43:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5051800 kB' 'MemAvailable: 9484756 kB' 'Buffers: 35688 kB' 'Cached: 4536276 kB' 'SwapCached: 0 kB' 'Active: 999180 kB' 'Inactive: 3706004 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 143808 kB' 'Active(file): 998108 kB' 'Inactive(file): 3562196 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 162496 kB' 'Mapped: 67936 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257948 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63992 kB' 'KernelStack: 4368 kB' 'PageTables: 3556 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 509320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:45.557 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:43:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.557 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:43:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.557 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:43:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.557 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:43:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.557 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:43:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.557 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:43:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 
00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # continue 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:43:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:43:34 -- setup/common.sh@33 -- # echo 1024 00:04:45.558 16:43:34 -- setup/common.sh@33 -- # return 0 00:04:45.558 16:43:34 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.558 16:43:34 -- setup/hugepages.sh@112 -- # get_nodes 00:04:45.558 16:43:34 -- setup/hugepages.sh@27 -- # local node 00:04:45.558 16:43:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.558 16:43:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:45.558 16:43:34 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:45.558 16:43:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.558 16:43:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.558 16:43:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.558 
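get_nodes, traced at hugepages.sh@27-33 just above, discovers the NUMA topology once and records what each node currently holds. A sketch of the enumeration it performs (the echo in the loop body is illustrative; the real script fills nodes_sys as the trace shows):

    shopt -s extglob                 # the node+([0-9]) glob in the trace needs extglob
    no_nodes=0
    for node in /sys/devices/system/node/node+([0-9]); do
        echo "found NUMA node ${node##*node}"   # ".../node0" -> "0"
        (( ++no_nodes ))
    done
    (( no_nodes > 0 ))               # this VM has a single node, hence no_nodes=1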
00:04:45.558 16:43:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:45.558 16:43:34 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:45.558 16:43:34 -- setup/common.sh@18 -- # local node=0
00:04:45.558 16:43:34 -- setup/common.sh@19 -- # local var val
00:04:45.558 16:43:34 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.558 16:43:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.558 16:43:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:45.559 16:43:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:45.559 16:43:34 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.559 16:43:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.559 16:43:34 -- setup/common.sh@31 -- # IFS=': '
00:04:45.559 16:43:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5051800 kB' 'MemUsed: 7191180 kB' 'SwapCached: 0 kB' 'Active: 999180 kB' 'Inactive: 3706044 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 143848 kB' 'Active(file): 998108 kB' 'Inactive(file): 3562196 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'FilePages: 4571964 kB' 'Mapped: 67936 kB' 'AnonPages: 162308 kB' 'Shmem: 2596 kB' 'KernelStack: 4468 kB' 'PageTables: 3644 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193956 kB' 'Slab: 257948 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:45.559 16:43:34 -- setup/common.sh@31-32 -- # [xtrace scan collapsed: every node0 key from MemTotal through FilePmdMapped tested against HugePages_Surp; no match, continue]
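This is the same helper as before, now with a node argument; per the trace the only differences are the input file and a prefix strip, because every line of a per-node meminfo reads "Node 0 MemTotal: ...". The following is lifted almost verbatim from the traced statements:

    shopt -s extglob                   # again required for the +([0-9]) pattern
    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo   # node-scoped source
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."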
00:04:45.559 16:43:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.559 16:43:34 -- setup/common.sh@32 -- # continue
00:04:45.559 16:43:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.559 16:43:34 -- setup/common.sh@32 -- # continue
00:04:45.559 16:43:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.559 16:43:34 -- setup/common.sh@33 -- # echo 0
00:04:45.559 16:43:34 -- setup/common.sh@33 -- # return 0
00:04:45.559 16:43:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:45.559 16:43:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:45.559 16:43:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:45.559 16:43:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:45.559 node0=1024 expecting 1024
00:04:45.559 16:43:34 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:45.559 16:43:34 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:45.559
00:04:45.559 real 0m1.153s
00:04:45.559 user 0m0.311s
00:04:45.559 sys 0m0.831s
00:04:45.560 16:43:34 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:45.560 16:43:34 -- common/autotest_common.sh@10 -- # set +x
00:04:45.560 ************************************
00:04:45.560 END TEST default_setup
00:04:45.560 ************************************
00:04:45.560 16:43:34 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:45.560 16:43:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:45.560 16:43:34 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:45.560 16:43:34 -- common/autotest_common.sh@10 -- # set +x
00:04:45.560 ************************************
00:04:45.560 START TEST per_node_1G_alloc
00:04:45.560 ************************************
00:04:45.560 16:43:34 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:04:45.560 16:43:34 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:45.560 16:43:34 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:45.560 16:43:34 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:45.560 16:43:34 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:45.560 16:43:34 -- setup/hugepages.sh@51 -- # shift
00:04:45.560 16:43:34 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:45.560 16:43:34 -- setup/hugepages.sh@52 -- # local node_ids
00:04:45.560 16:43:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:45.560 16:43:34 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:45.560 16:43:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:45.560 16:43:34 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:45.560 16:43:34 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:45.560 16:43:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:45.560 16:43:34 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:45.560 16:43:34 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:45.560 16:43:34 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:45.560 16:43:34 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
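per_node_1G_alloc starts by converting its request into a page count: get_test_nr_hugepages is handed 1048576 (kB, i.e. 1 GiB) for node 0, checks it against the default huge page size, and arrives at nr_hugepages=512. The division itself is not in the trace, but given the 2048 kB Hugepagesize reported in every snapshot above the arithmetic can only be (a restatement of the visible numbers, not the script's exact line):

    size=1048576            # requested kB (1 GiB), first argument above
    default_hugepages=2048  # kB per 2 MiB huge page (Hugepagesize in the snapshots)
    (( size >= default_hugepages )) && nr_hugepages=$(( size / default_hugepages ))
    echo "$nr_hugepages"    # -> 512, matching hugepages.sh@57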
"${user_nodes[@]}" 00:04:45.560 16:43:34 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:45.560 16:43:34 -- setup/hugepages.sh@73 -- # return 0 00:04:45.560 16:43:34 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:45.560 16:43:34 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:45.560 16:43:34 -- setup/hugepages.sh@146 -- # setup output 00:04:45.560 16:43:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.560 16:43:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:45.817 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:45.817 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.076 16:43:34 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:46.076 16:43:34 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:46.076 16:43:34 -- setup/hugepages.sh@89 -- # local node 00:04:46.076 16:43:34 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.076 16:43:34 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.076 16:43:34 -- setup/hugepages.sh@92 -- # local surp 00:04:46.076 16:43:34 -- setup/hugepages.sh@93 -- # local resv 00:04:46.076 16:43:34 -- setup/hugepages.sh@94 -- # local anon 00:04:46.076 16:43:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.076 16:43:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.076 16:43:34 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.076 16:43:34 -- setup/common.sh@18 -- # local node= 00:04:46.076 16:43:34 -- setup/common.sh@19 -- # local var val 00:04:46.076 16:43:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.076 16:43:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.076 16:43:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.076 16:43:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.076 16:43:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.076 16:43:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.076 16:43:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.076 16:43:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.076 16:43:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6093512 kB' 'MemAvailable: 10526468 kB' 'Buffers: 35688 kB' 'Cached: 4536276 kB' 'SwapCached: 0 kB' 'Active: 999192 kB' 'Inactive: 3706324 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 144124 kB' 'Active(file): 998104 kB' 'Inactive(file): 3562200 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 384 kB' 'Writeback: 0 kB' 'AnonPages: 162716 kB' 'Mapped: 67916 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257576 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63620 kB' 'KernelStack: 4464 kB' 'PageTables: 3752 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 509320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19540 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:46.076 16:43:34 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.076 16:43:34 -- setup/common.sh@32 -- # continue 00:04:46.076 
00:04:46.076 [xtrace condensed: setup/common.sh@31-32 walk the snapshot field by field against \A\n\o\n\H\u\g\e\P\a\g\e\s, continuing past MemTotal through HardwareCorrupted]
00:04:46.077 16:43:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:46.077 16:43:34 -- setup/common.sh@33 -- # echo 0
00:04:46.077 16:43:34 -- setup/common.sh@33 -- # return 0
00:04:46.077 16:43:34 -- setup/hugepages.sh@97 -- # anon=0
00:04:46.077 16:43:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:46.077 [xtrace condensed: setup/common.sh@17-31 set get=HugePages_Surp and re-read /proc/meminfo; snapshot unchanged except Inactive 3706172 kB, AnonPages 162540 kB, Mapped 67912 kB, KernelStack 4440 kB, PageTables 3684 kB]
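The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] guard before the AnonHugePages probe reads the kernel's transparent-hugepage switch; the bracketed token is the active mode, and only when it is not [never] can THP-backed anonymous memory drive AnonHugePages above zero. The check in isolation (standard kernel sysfs path, not SPDK-specific):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *'[never]'* ]]; then
        # THP may be backing anonymous mappings, so the counter is worth checking
        grep AnonHugePages /proc/meminfo
    fi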
00:04:46.339 [xtrace condensed: setup/common.sh@31-32 scan the fields against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, continuing past MemTotal through HugePages_Rsvd]
00:04:46.340 16:43:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.340 16:43:34 -- setup/common.sh@33 -- # echo 0
00:04:46.340 16:43:34 -- setup/common.sh@33 -- # return 0
00:04:46.340 16:43:34 -- setup/hugepages.sh@99 -- # surp=0
00:04:46.340 16:43:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:46.340 [xtrace condensed: setup/common.sh@17-31 set get=HugePages_Rsvd and re-read /proc/meminfo; VmallocUsed now 19556 kB, otherwise unchanged]
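surp coming back 0 is the expected steady state: HugePages_Surp only rises when the kernel hands out pages beyond the static pool under its overcommit allowance, a knob separate from nr_hugepages. For reference, both are standard procfs files, nothing SPDK-specific:

    cat /proc/sys/vm/nr_hugepages              # the static pool (512 here)
    cat /proc/sys/vm/nr_overcommit_hugepages   # extra pages the kernel may hand out;
                                               # while in use they count as HugePages_Surp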
00:04:46.340 [xtrace condensed: setup/common.sh@31-32 scan the fields against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, continuing past MemTotal through HugePages_Free]
00:04:46.341 16:43:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:46.341 16:43:34 -- setup/common.sh@33 -- # echo 0
00:04:46.341 16:43:34 -- setup/common.sh@33 -- # return 0
00:04:46.341 16:43:34 -- setup/hugepages.sh@100 -- # resv=0
00:04:46.341 nr_hugepages=512
00:04:46.341 16:43:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:46.341 resv_hugepages=0
00:04:46.341 16:43:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:46.341 surplus_hugepages=0
00:04:46.341 16:43:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:46.341 anon_hugepages=0
00:04:46.341 16:43:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:46.341 16:43:35 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:46.341 16:43:35 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:46.341 16:43:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:46.341 [xtrace condensed: setup/common.sh@17-31 set get=HugePages_Total and re-read /proc/meminfo; Inactive 3705820 kB, AnonPages 162448 kB, KernelStack 4476 kB, PageTables 3616 kB]
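Taken together, the assertions above reduce to a small identity: the 512 pages requested must equal the pool the kernel reports, with no surplus or reserved remainder. A sketch under the names the trace uses, relying on the get_meminfo sketch earlier (the shipped verify_nr_hugepages also does the per-node bookkeeping that follows):

    verify_nr_hugepages() {
        local nr_hugepages=512                 # what get_test_nr_hugepages requested
        local anon surp resv total
        anon=$(get_meminfo AnonHugePages)      # 0: THP not in play
        surp=$(get_meminfo HugePages_Surp)     # 0: nothing beyond the static pool
        resv=$(get_meminfo HugePages_Rsvd)     # 0: nothing promised but unfaulted
        total=$(get_meminfo HugePages_Total)   # 512
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
        (( total == nr_hugepages + surp + resv ))
    }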
00:04:46.341 [xtrace condensed: setup/common.sh@31-32 scan the fields against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, continuing past MemTotal through FilePmdMapped]
00:04:46.342 16:43:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:46.342 16:43:35 -- setup/common.sh@33 -- # echo 512
00:04:46.342 16:43:35 -- setup/common.sh@33 -- # return 0
00:04:46.342 16:43:35 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:46.342 16:43:35 -- setup/hugepages.sh@112 -- # get_nodes
00:04:46.342 16:43:35 -- setup/hugepages.sh@27 -- # local node
00:04:46.342 16:43:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:46.342 16:43:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:46.342 16:43:35 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:46.342 16:43:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:46.342 16:43:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:46.342 16:43:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:46.342 16:43:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:46.342 16:43:35 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.342 16:43:35 -- setup/common.sh@18 -- # local node=0
00:04:46.342 16:43:35 -- setup/common.sh@19 -- # local var val
00:04:46.342 16:43:35 -- setup/common.sh@20 -- # local mem_f mem
00:04:46.342 16:43:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.342 16:43:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:46.342 16:43:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:46.342 16:43:35 -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.342 16:43:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.342 16:43:35 -- setup/common.sh@31 -- # IFS=': '
00:04:46.342 16:43:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6094788 kB' 'MemUsed: 6148192 kB' 'SwapCached: 0 kB' 'Active: 999192 kB' 'Inactive: 3706072 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 143872 kB' 'Active(file): 998104 kB' 'Inactive(file): 3562200 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 384 kB' 'Writeback: 0 kB' 'FilePages: 4571964 kB' 'Mapped: 67912 kB' 'AnonPages: 162420 kB' 'Shmem: 2596 kB' 'KernelStack: 4528 kB' 'PageTables: 3840 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193956 kB' 'Slab: 257576 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
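From here the same accounting repeats per NUMA node: get_nodes globs /sys/devices/system/node/node+([0-9]) (a single node on this VM), records each node's pool in nodes_sys, and get_meminfo runs again with node=0, which switches it to the per-node meminfo just printed (note the per-node extras MemUsed and FilePages, and the absence of the swap and vmalloc fields). What NRHUGE=512 HUGENODE=0 asked setup.sh to arrange can be reproduced directly against sysfs, assuming 2048 kB pages and root:

    # request 512 x 2048 kB pages on node 0 (standard kernel sysfs interface)
    echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    # the per-node counters the verifier parses next
    grep HugePages_ /sys/devices/system/node/node0/meminfo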
00:04:46.342 16:43:35 -- setup/hugepages.sh@112 -- # get_nodes
00:04:46.342 16:43:35 -- setup/hugepages.sh@27 -- # local node
00:04:46.342 16:43:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:46.342 16:43:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:46.342 16:43:35 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:46.342 16:43:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:46.342 16:43:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:46.342 16:43:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:46.342 16:43:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:46.342 16:43:35 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.342 16:43:35 -- setup/common.sh@18 -- # local node=0
00:04:46.342 16:43:35 -- setup/common.sh@19 -- # local var val
00:04:46.342 16:43:35 -- setup/common.sh@20 -- # local mem_f mem
00:04:46.342 16:43:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.342 16:43:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:46.342 16:43:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:46.342 16:43:35 -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.342 16:43:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.342 16:43:35 -- setup/common.sh@31 -- # IFS=': '
00:04:46.342 16:43:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6094788 kB' 'MemUsed: 6148192 kB' 'SwapCached: 0 kB' 'Active: 999192 kB' 'Inactive: 3706072 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 143872 kB' 'Active(file): 998104 kB' 'Inactive(file): 3562200 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 384 kB' 'Writeback: 0 kB' 'FilePages: 4571964 kB' 'Mapped: 67912 kB' 'AnonPages: 162420 kB' 'Shmem: 2596 kB' 'KernelStack: 4528 kB' 'PageTables: 3840 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193956 kB' 'Slab: 257576 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:46.342 16:43:35 -- setup/common.sh@31 -- # read -r var val _
00:04:46.342 [... xtrace elided: per-field scan of the node0 snapshot until HugePages_Surp matches ...]
00:04:46.343 16:43:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.343 16:43:35 -- setup/common.sh@33 -- # echo 0
00:04:46.343 16:43:35 -- setup/common.sh@33 -- # return 0
00:04:46.343 16:43:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:46.343 16:43:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:46.343 16:43:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:46.343 16:43:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:46.343 16:43:35 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:46.343 node0=512 expecting 512
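The per-node loop just traced (get_nodes, the nodes_test accounting, the node0 HugePages_Surp lookup, and the node0=512 expecting 512 report) condenses to something like the sketch below. This is an illustrative reconstruction, not the script itself, and the resv addition is omitted since it was 0 in this run:

    # Per-node check: expected = per-node target (+ resv) + node surplus;
    # reported = the node's HugePages_Total from its sysfs meminfo file,
    # where the "Node <id>" prefix shifts the key to column 3.
    nodes_test=([0]=512)
    for node in "${!nodes_test[@]}"; do
        node_mem=/sys/devices/system/node/node$node/meminfo
        surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$node_mem")
        (( nodes_test[node] += surp ))
        sys=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_mem")
        echo "node$node=$sys expecting ${nodes_test[node]}"
    done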
00:04:46.343 16:43:35 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:46.343
00:04:46.343 real 0m0.741s
00:04:46.343 user 0m0.316s
00:04:46.343 sys 0m0.466s
00:04:46.343 16:43:35 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:46.343 16:43:35 -- common/autotest_common.sh@10 -- # set +x
00:04:46.343 ************************************
00:04:46.343 END TEST per_node_1G_alloc
00:04:46.343 ************************************
00:04:46.343 16:43:35 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:46.343 16:43:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:46.343 16:43:35 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:46.343 16:43:35 -- common/autotest_common.sh@10 -- # set +x
00:04:46.343 ************************************
00:04:46.343 START TEST even_2G_alloc
00:04:46.343 ************************************
00:04:46.344 16:43:35 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:04:46.344 16:43:35 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:46.344 16:43:35 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:46.344 16:43:35 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:46.344 16:43:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:46.344 16:43:35 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:46.344 16:43:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:46.344 16:43:35 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:46.344 16:43:35 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:46.344 16:43:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:46.344 16:43:35 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:46.344 16:43:35 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:46.344 16:43:35 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:46.344 16:43:35 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:46.344 16:43:35 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:46.344 16:43:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:46.344 16:43:35 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:46.344 16:43:35 -- setup/hugepages.sh@83 -- # : 0
00:04:46.344 16:43:35 -- setup/hugepages.sh@84 -- # : 0
00:04:46.344 16:43:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:46.344 16:43:35 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:46.344 16:43:35 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:46.344 16:43:35 -- setup/hugepages.sh@153 -- # setup output
00:04:46.344 16:43:35 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:46.344 16:43:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:46.602 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:46.602 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
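Before verify_nr_hugepages runs, the arithmetic get_test_nr_hugepages just performed is worth spelling out: the values in this run are consistent with the size argument being in kB, so 2097152 kB divided by the 2048 kB Hugepagesize gives the nr_hugepages=1024 traced above. A hedged sketch of that conversion; the helper name and the direct procfs write are illustrative, and the write needs root:

    request_hugepages_sketch() {
        # size_kb / per-page size in kB -> page count (2097152 / 2048 = 1024)
        local size_kb=$1 hp_kb nr
        hp_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)
        nr=$(( size_kb / hp_kb ))
        echo "$nr" > /proc/sys/vm/nr_hugepages   # grow/shrink the default pool
        echo "nr_hugepages=$nr"
    }
    # request_hugepages_sketch 2097152   -> nr_hugepages=1024 with 2048 kB pages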
00:04:47.172 16:43:35 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:47.172 16:43:35 -- setup/hugepages.sh@89 -- # local node
00:04:47.172 16:43:35 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:47.172 16:43:35 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:47.172 16:43:35 -- setup/hugepages.sh@92 -- # local surp
00:04:47.172 16:43:35 -- setup/hugepages.sh@93 -- # local resv
00:04:47.172 16:43:35 -- setup/hugepages.sh@94 -- # local anon
00:04:47.172 16:43:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:47.172 16:43:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:47.172 16:43:35 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:47.172 16:43:35 -- setup/common.sh@18 -- # local node=
00:04:47.172 16:43:35 -- setup/common.sh@19 -- # local var val
00:04:47.172 16:43:35 -- setup/common.sh@20 -- # local mem_f mem
00:04:47.172 16:43:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.172 16:43:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.172 16:43:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.172 16:43:35 -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.172 16:43:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.172 16:43:35 -- setup/common.sh@31 -- # IFS=': '
00:04:47.172 16:43:35 -- setup/common.sh@31 -- # read -r var val _
00:04:47.173 16:43:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5049408 kB' 'MemAvailable: 9482364 kB' 'Buffers: 35688 kB' 'Cached: 4536276 kB' 'SwapCached: 0 kB' 'Active: 999212 kB' 'Inactive: 3706112 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 143940 kB' 'Active(file): 998132 kB' 'Inactive(file): 3562172 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 162600 kB' 'Mapped: 67968 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257720 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63764 kB' 'KernelStack: 4384 kB' 'PageTables: 3608 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 509520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:47.173 [... xtrace elided: per-field scan until AnonHugePages matches ...]
00:04:47.174 16:43:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:47.174 16:43:35 -- setup/common.sh@33 -- # echo 0
00:04:47.174 16:43:35 -- setup/common.sh@33 -- # return 0
00:04:47.174 16:43:35 -- setup/hugepages.sh@97 -- # anon=0
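The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test that opened verify_nr_hugepages is THP detection: the bracketed word in /sys/kernel/mm/transparent_hugepage/enabled is the active mode, and AnonHugePages is only worth folding into the accounting when that mode is not never. A standalone restatement of that check (illustrative, not the script itself):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP can back anonymous mappings, so count AnonHugePages (kB).
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    fi
    echo "anon=$anon"   # 0 kB in this run, hence anon=0 in the trace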
00:04:47.174 16:43:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:47.174 16:43:35 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:47.174 16:43:35 -- setup/common.sh@18 -- # local node=
00:04:47.174 16:43:35 -- setup/common.sh@19 -- # local var val
00:04:47.174 16:43:35 -- setup/common.sh@20 -- # local mem_f mem
00:04:47.174 16:43:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.174 16:43:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.174 16:43:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.174 16:43:35 -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.174 16:43:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.174 16:43:35 -- setup/common.sh@31 -- # IFS=': '
00:04:47.174 16:43:35 -- setup/common.sh@31 -- # read -r var val _
00:04:47.174 16:43:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5049408 kB' 'MemAvailable: 9482364 kB' 'Buffers: 35688 kB' 'Cached: 4536276 kB' 'SwapCached: 0 kB' 'Active: 999212 kB' 'Inactive: 3706132 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 143960 kB' 'Active(file): 998132 kB' 'Inactive(file): 3562172 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 162620 kB' 'Mapped: 67968 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257720 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63764 kB' 'KernelStack: 4400 kB' 'PageTables: 3640 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 509520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:47.174 [... xtrace elided: per-field scan until HugePages_Surp matches ...]
00:04:47.175 16:43:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.175 16:43:36 -- setup/common.sh@33 -- # echo 0
00:04:47.175 16:43:36 -- setup/common.sh@33 -- # return 0
00:04:47.175 16:43:36 -- setup/hugepages.sh@99 -- # surp=0
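surp above, and resv next, are each read with a full get_meminfo pass. When the loop form is not needed, all four HugePages counters can be snapshotted in a single pass; a compact alternative (illustrative, relying only on the stock /proc/meminfo keys and their file order):

    # HugePages_Total/Free/Rsvd/Surp appear in this order in /proc/meminfo.
    read -r total free rsvd surp < <(awk '
        /^HugePages_(Total|Free|Rsvd|Surp):/ { printf "%s ", $2 }
        END { print "" }' /proc/meminfo)
    echo "total=$total free=$free rsvd=$rsvd surp=$surp"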
00:04:47.175 16:43:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:47.175 16:43:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:47.175 16:43:36 -- setup/common.sh@18 -- # local node=
00:04:47.175 16:43:36 -- setup/common.sh@19 -- # local var val
00:04:47.175 16:43:36 -- setup/common.sh@20 -- # local mem_f mem
00:04:47.175 16:43:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.175 16:43:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.175 16:43:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.175 16:43:36 -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.175 16:43:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.175 16:43:36 -- setup/common.sh@31 -- # IFS=': '
00:04:47.175 16:43:36 -- setup/common.sh@31 -- # read -r var val _
00:04:47.175 16:43:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5049408 kB' 'MemAvailable: 9482364 kB' 'Buffers: 35688 kB' 'Cached: 4536276 kB' 'SwapCached: 0 kB' 'Active: 999204 kB' 'Inactive: 3705980 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 143808 kB' 'Active(file): 998132 kB' 'Inactive(file): 3562172 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 162452 kB' 'Mapped: 67936 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257720 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63764 kB' 'KernelStack: 4400 kB' 'PageTables: 3632 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 509520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:47.175 [... xtrace elided: per-field scan until HugePages_Rsvd matches ...]
00:04:47.176 16:43:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:47.176 16:43:36 -- setup/common.sh@33 -- # echo 0
00:04:47.176 16:43:36 -- setup/common.sh@33 -- # return 0
00:04:47.176 16:43:36 -- setup/hugepages.sh@100 -- # resv=0
00:04:47.176 16:43:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:47.176 nr_hugepages=1024
00:04:47.176 16:43:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:47.176 resv_hugepages=0
00:04:47.176 16:43:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:47.176 surplus_hugepages=0
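The status block above feeds the assertion traced next: the kernel-reported pool must equal the requested pool plus surplus and reserved pages, and 1024 == 1024 + 0 + 0 holds in this run. A reduced restatement of that invariant (nr_hugepages hard-coded to this run's target; the reporting lines are ours):

    nr_hugepages=1024   # what the test asked for (NRHUGE)
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
    resv=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent ($total pages)"
    else
        echo "mismatch: total=$total, expected $((nr_hugepages + surp + resv))" >&2
    fi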
00:04:47.176 16:43:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:47.176 16:43:36 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:47.176 16:43:36 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:47.176 16:43:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:47.176 16:43:36 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:47.176 16:43:36 -- setup/common.sh@18 -- # local node=
00:04:47.176 16:43:36 -- setup/common.sh@19 -- # local var val
00:04:47.176 16:43:36 -- setup/common.sh@20 -- # local mem_f mem
00:04:47.176 16:43:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.176 16:43:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.176 16:43:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.176 16:43:36 -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.176 16:43:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.176 16:43:36 -- setup/common.sh@31 -- # IFS=': '
00:04:47.176 16:43:36 -- setup/common.sh@31 -- # read -r var val _
00:04:47.176 16:43:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5049672 kB' 'MemAvailable: 9482628 kB' 'Buffers: 35688 kB' 'Cached: 4536276 kB' 'SwapCached: 0 kB' 'Active: 999204 kB' 'Inactive: 3705980 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 143808 kB' 'Active(file): 998132 kB' 'Inactive(file): 3562172 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 162452 kB' 'Mapped: 67936 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257720 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63764 kB' 'KernelStack: 4468 kB' 'PageTables: 3632 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 509520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
[trace condensed: setup/common.sh@31-32 scans every key of the dump above against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, taking continue on each, until HugePages_Total matches]
00:04:47.437 16:43:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:47.437 16:43:36 -- setup/common.sh@33 -- # echo 1024
00:04:47.437 16:43:36 -- setup/common.sh@33 -- # return 0
00:04:47.437 16:43:36 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:47.437 16:43:36 -- setup/hugepages.sh@112 -- # get_nodes
00:04:47.437 16:43:36 -- setup/hugepages.sh@27 -- # local node
00:04:47.437 16:43:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:47.437 16:43:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:47.437 16:43:36 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:47.437 16:43:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:47.437 16:43:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:47.437 16:43:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:47.437 16:43:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:47.437 16:43:36 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:47.437 16:43:36 -- setup/common.sh@18 -- # local node=0
00:04:47.437 16:43:36 -- setup/common.sh@19 -- # local var val
00:04:47.437 16:43:36 -- setup/common.sh@20 -- # local mem_f mem
00:04:47.437 16:43:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.437 16:43:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:47.437 16:43:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:47.437 16:43:36 -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.437 16:43:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.437 16:43:36 -- setup/common.sh@31 -- # IFS=': '
00:04:47.437 16:43:36 -- setup/common.sh@31 -- # read -r var val _
00:04:47.437 16:43:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5049672 kB' 'MemUsed: 7193308 kB' 'SwapCached: 0 kB' 'Active: 999204 kB' 'Inactive: 3705980 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 143808 kB' 'Active(file): 998132 kB' 'Inactive(file): 3562172 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 4571964 kB' 'Mapped: 67936 kB' 'AnonPages: 162452 kB' 'Shmem: 2596 kB' 'KernelStack: 4468 kB' 'PageTables: 3632 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193956 kB' 'Slab: 257720 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
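Two details of the node0 dump above are easy to eyeball: the per-node file has no MemAvailable but adds MemUsed, which is plain subtraction, and on this single-node VM all 1024 hugepages sit on node0. The arithmetic, with numbers taken straight from the dump:

    echo $(( 12242980 - 5049672 ))   # 7193308 kB, matching 'MemUsed: 7193308 kB'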
[trace condensed: setup/common.sh@31-32 scans the node0 dump's keys against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, taking continue on each, until HugePages_Surp matches]
00:04:47.438 16:43:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.438 16:43:36 -- setup/common.sh@33 -- # echo 0
00:04:47.438 16:43:36 -- setup/common.sh@33 -- # return 0
00:04:47.438 16:43:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:47.438 16:43:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:47.438 16:43:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:47.438 16:43:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:47.438 node0=1024 expecting 1024
00:04:47.438 16:43:36 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:47.438 16:43:36 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:47.438
00:04:47.438 real	0m0.981s
00:04:47.438 user	0m0.273s
00:04:47.438 sys	0m0.747s
00:04:47.438 16:43:36 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:47.438 16:43:36 -- common/autotest_common.sh@10 -- # set +x
00:04:47.438 ************************************
00:04:47.438 END TEST even_2G_alloc
00:04:47.438 ************************************
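The @115-@130 records above are the whole per-node verification: the expected count for each node (nodes_test) absorbs reserved and surplus pages, then is compared with what the kernel reports (nodes_sys). A compressed sketch of that logic, reusing the get_meminfo sketch earlier; the trace also builds sorted_t/sorted_s index sets, which this sketch replaces with a direct comparison:

    nodes_sys=([0]=1024)    # from get_nodes: pages the kernel reports per node
    nodes_test=([0]=1024)   # pages the test asked for on each node
    resv=0                  # HugePages_Rsvd, computed earlier in the trace

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))               # reserved pages count toward the node
        surp=$(get_meminfo HugePages_Surp "$node")   # 0 in this run
        (( nodes_test[node] += surp ))               # so do surplus pages
    done

    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
    done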
00:04:47.438 16:43:36 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:47.438 16:43:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:47.438 16:43:36 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:47.438 16:43:36 -- common/autotest_common.sh@10 -- # set +x
00:04:47.438 ************************************
00:04:47.438 START TEST odd_alloc
00:04:47.438 ************************************
00:04:47.438 16:43:36 -- common/autotest_common.sh@1114 -- # odd_alloc
00:04:47.438 16:43:36 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:47.438 16:43:36 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:47.438 16:43:36 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:47.438 16:43:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:47.438 16:43:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:47.438 16:43:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:47.438 16:43:36 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:47.438 16:43:36 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:47.438 16:43:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:47.438 16:43:36 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:47.438 16:43:36 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:47.438 16:43:36 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:47.438 16:43:36 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:47.438 16:43:36 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:47.438 16:43:36 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:47.438 16:43:36 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:47.438 16:43:36 -- setup/hugepages.sh@83 -- # : 0
00:04:47.438 16:43:36 -- setup/hugepages.sh@84 -- # : 0
00:04:47.438 16:43:36 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:47.438 16:43:36 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:47.438 16:43:36 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:47.438 16:43:36 -- setup/hugepages.sh@160 -- # setup output
00:04:47.438 16:43:36 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:47.438 16:43:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:47.696 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:47.697 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
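HUGEMEM=2049 (in MB) is what makes this the "odd" allocation test: 2049 MB is 2098176 kB, which is not an even multiple of the 2048 kB hugepage size, and get_test_nr_hugepages lands on 1025 pages. The trace only shows the input and the result, so the round-up below is an assumption, but the arithmetic checks out against the log:

    default_hugepages=2048                    # kB, per 'Hugepagesize: 2048 kB'
    size=$(( 2049 * 1024 ))                   # HUGEMEM=2049 MB -> 2098176 kB
    echo $(( size / default_hugepages ))      # 1024 (plain floor)
    echo $(( (size + default_hugepages - 1) / default_hugepages ))   # 1025, the traced value

The kernel later confirms it: 'Hugetlb: 2099200 kB' in the dumps below is exactly 1025 * 2048 kB.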
00:04:48.269 16:43:37 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:48.269 16:43:37 -- setup/hugepages.sh@89 -- # local node
00:04:48.269 16:43:37 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:48.269 16:43:37 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:48.269 16:43:37 -- setup/hugepages.sh@92 -- # local surp
00:04:48.269 16:43:37 -- setup/hugepages.sh@93 -- # local resv
00:04:48.269 16:43:37 -- setup/hugepages.sh@94 -- # local anon
00:04:48.269 16:43:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:48.269 16:43:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:48.269 16:43:37 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:48.269 16:43:37 -- setup/common.sh@18 -- # local node=
00:04:48.269 16:43:37 -- setup/common.sh@19 -- # local var val
00:04:48.269 16:43:37 -- setup/common.sh@20 -- # local mem_f mem
00:04:48.269 16:43:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:48.269 16:43:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:48.269 16:43:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:48.269 16:43:37 -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.269 16:43:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.269 16:43:37 -- setup/common.sh@31 -- # IFS=': '
00:04:48.269 16:43:37 -- setup/common.sh@31 -- # read -r var val _
00:04:48.269 16:43:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5049540 kB' 'MemAvailable: 9482500 kB' 'Buffers: 35688 kB' 'Cached: 4536280 kB' 'SwapCached: 0 kB' 'Active: 999220 kB' 'Inactive: 3705872 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 143704 kB' 'Active(file): 998140 kB' 'Inactive(file): 3562168 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 162364 kB' 'Mapped: 67952 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257672 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63716 kB' 'KernelStack: 4384 kB' 'PageTables: 3596 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 509648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
[trace condensed: setup/common.sh@31-32 scans the dump's keys against \A\n\o\n\H\u\g\e\P\a\g\e\s, taking continue on each, until AnonHugePages matches]
00:04:48.271 16:43:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:48.271 16:43:37 -- setup/common.sh@33 -- # echo 0
00:04:48.271 16:43:37 -- setup/common.sh@33 -- # return 0
00:04:48.271 16:43:37 -- setup/hugepages.sh@97 -- # anon=0
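The @96 record is a gate on transparent hugepages: anon pages are only folded into the accounting when THP is not disabled, and the bracketed token in /sys/kernel/mm/transparent_hugepage/enabled names the active mode ("always [madvise] never" on this host). A sketch of that gate, reconstructed from the trace:

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP may hand out anonymous hugepages, so count them too
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
    fi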
00:04:48.271 16:43:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:48.271 16:43:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:48.271 16:43:37 -- setup/common.sh@18 -- # local node=
00:04:48.271 16:43:37 -- setup/common.sh@19 -- # local var val
00:04:48.271 16:43:37 -- setup/common.sh@20 -- # local mem_f mem
00:04:48.271 16:43:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:48.271 16:43:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:48.271 16:43:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:48.271 16:43:37 -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.271 16:43:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.271 16:43:37 -- setup/common.sh@31 -- # IFS=': '
00:04:48.271 16:43:37 -- setup/common.sh@31 -- # read -r var val _
00:04:48.271 16:43:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5049540 kB' 'MemAvailable: 9482500 kB' 'Buffers: 35688 kB' 'Cached: 4536280 kB' 'SwapCached: 0 kB' 'Active: 999212 kB' 'Inactive: 3705748 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 143580 kB' 'Active(file): 998140 kB' 'Inactive(file): 3562168 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 162472 kB' 'Mapped: 67936 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257688 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63732 kB' 'KernelStack: 4384 kB' 'PageTables: 3588 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 509648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
[trace condensed: setup/common.sh@31-32 scans the dump's keys against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, taking continue on each, until HugePages_Surp matches]
00:04:48.272 16:43:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:48.272 16:43:37 -- setup/common.sh@33 -- # echo 0
00:04:48.272 16:43:37 -- setup/common.sh@33 -- # return 0
00:04:48.272 16:43:37 -- setup/hugepages.sh@99 -- # surp=0
00:04:48.272 16:43:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:48.272 16:43:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:48.272 16:43:37 -- setup/common.sh@18 -- # local node=
00:04:48.272 16:43:37 -- setup/common.sh@19 -- # local var val
00:04:48.272 16:43:37 -- setup/common.sh@20 -- # local mem_f mem
00:04:48.272 16:43:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:48.272 16:43:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:48.272 16:43:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:48.273 16:43:37 -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.273 16:43:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.273 16:43:37 -- setup/common.sh@31 -- # IFS=': '
00:04:48.273 16:43:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5049540 kB' 'MemAvailable: 9482500 kB' 'Buffers: 35688 kB' 'Cached: 4536280 kB' 'SwapCached: 0 kB' 'Active: 999212 kB' 'Inactive: 3705956 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 143788 kB' 'Active(file): 998140 kB' 'Inactive(file): 3562168 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 162412 kB' 'Mapped: 67936 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257688 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63732 kB' 'KernelStack: 4368 kB' 'PageTables: 3552 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 509648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:48.273 16:43:37 -- setup/common.sh@31 -- # read -r var val _
[trace condensed: the setup/common.sh@31-32 scan against \H\u\g\e\P\a\g\e\s\_\R\s\v\d runs on from here; the excerpt ends mid-scan]
-r var val _ 00:04:48.273 16:43:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.273 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.273 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.273 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.273 16:43:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.273 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.273 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.273 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.273 16:43:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.273 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.273 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.273 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.273 16:43:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.273 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.273 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.273 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.273 16:43:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.273 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.273 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.273 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.273 16:43:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.273 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.273 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.273 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.273 16:43:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.273 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.273 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.273 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # 
continue 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.274 16:43:37 -- setup/common.sh@33 -- # echo 0 00:04:48.274 16:43:37 -- setup/common.sh@33 -- # return 0 00:04:48.274 16:43:37 -- setup/hugepages.sh@100 -- # resv=0 00:04:48.274 nr_hugepages=1025 00:04:48.274 16:43:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:48.274 resv_hugepages=0 00:04:48.274 16:43:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:48.274 surplus_hugepages=0 00:04:48.274 16:43:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:48.274 anon_hugepages=0 00:04:48.274 16:43:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:48.274 16:43:37 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:48.274 16:43:37 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:48.274 16:43:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:48.274 16:43:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:48.274 16:43:37 -- setup/common.sh@18 -- # local node= 00:04:48.274 16:43:37 -- setup/common.sh@19 -- # local var val 00:04:48.274 16:43:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.274 16:43:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.274 16:43:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.274 16:43:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.274 16:43:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.274 16:43:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5049540 kB' 'MemAvailable: 9482500 kB' 'Buffers: 35688 kB' 'Cached: 4536280 kB' 'SwapCached: 0 kB' 'Active: 999212 kB' 'Inactive: 3705956 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 143788 kB' 'Active(file): 998140 kB' 'Inactive(file): 3562168 
kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 162672 kB' 'Mapped: 67936 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257688 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63732 kB' 'KernelStack: 4436 kB' 'PageTables: 3552 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 509648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # continue 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
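Editor's note: the traces above are SPDK's get_meminfo helper from setup/common.sh. It loads /proc/meminfo (or a node's meminfo under sysfs), strips the per-node "Node N " prefix, then scans field by field under set -x, which is why every key produces a compare-and-continue pair before the requested value is echoed. A minimal standalone sketch of the same technique, reconstructed from the trace rather than copied from the SPDK source:

  #!/usr/bin/env bash
  # Sketch of a get_meminfo-style helper: return one field from /proc/meminfo,
  # or from a node's meminfo file when a node number is given. Illustrative.
  shopt -s extglob    # needed for the +([0-9]) pattern below

  get_meminfo() {
      local get=$1 node=$2
      local var val _ mem_f line
      local -a mem
      mem_f=/proc/meminfo
      # Per-node statistics live under sysfs; fall back to the global file.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem <"$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix, if any
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<<"$line"
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done
      return 1
  }

  get_meminfo HugePages_Total      # system-wide count, e.g. 1025 in this run
  get_meminfo HugePages_Surp 0     # same field, restricted to NUMA node 0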
00:04:48.274 16:43:37 -- setup/common.sh@31 -- # read -r var val _
00:04:48.274 16:43:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:48.274 16:43:37 -- setup/common.sh@32 -- # continue
[... identical read/compare/continue entries for the remaining meminfo fields, until the requested key is reached ...]
00:04:48.275 16:43:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:48.275 16:43:37 -- setup/common.sh@33 -- # echo 1025
00:04:48.275 16:43:37 -- setup/common.sh@33 -- # return 0
00:04:48.275 16:43:37 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:48.275 16:43:37 -- setup/hugepages.sh@112 -- # get_nodes
00:04:48.275 16:43:37 -- setup/hugepages.sh@27 -- # local node
00:04:48.275 16:43:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:48.275 16:43:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:48.275 16:43:37 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:48.275 16:43:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:48.275 16:43:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:48.275 16:43:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:48.275 16:43:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:48.275 16:43:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:48.275 16:43:37 -- setup/common.sh@18 -- # local node=0
00:04:48.275 16:43:37 -- setup/common.sh@19 -- # local var val
00:04:48.275 16:43:37 -- setup/common.sh@20 -- # local mem_f mem
00:04:48.275 16:43:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
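Editor's note: the hugepages.sh steps above assert pool consistency: the kernel-reported HugePages_Total must equal nr_hugepages plus surplus plus reserved pages, and each NUMA node is then checked for its expected share. A hedged sketch of that verification; the mi/nmi awk helpers are illustrative stand-ins for the script's own get_meminfo (per-node meminfo lines carry a "Node N" prefix, shifting the key to awk's third field):

  #!/usr/bin/env bash
  # Illustrative reconstruction of the consistency check traced above.
  mi()  { awk -v k="$1:" '$1 == k {print $2; exit}' /proc/meminfo; }
  nmi() { awk -v k="$2:" '$3 == k {print $4; exit}' "/sys/devices/system/node/node$1/meminfo"; }

  verify_nr_hugepages() {
      local nr=$1 total surp resv
      total=$(mi HugePages_Total)
      surp=$(mi HugePages_Surp)
      resv=$(mi HugePages_Rsvd)
      # The pool the kernel reports must match what the test configured.
      (( total == nr + surp + resv )) || return 1
      local dir node got
      for dir in /sys/devices/system/node/node[0-9]*; do
          node=${dir##*node}
          got=$(nmi "$node" HugePages_Total)
          echo "node$node=$got expecting $nr"   # same shape as the log output
          (( got == nr )) || return 1           # single-node VM: all pages on node0
      done
  }

  verify_nr_hugepages 1025    # odd_alloc's expectation in this run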
00:04:48.275 16:43:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:48.275 16:43:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:48.275 16:43:37 -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.275 16:43:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.275 16:43:37 -- setup/common.sh@31 -- # IFS=': '
00:04:48.275 16:43:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5049540 kB' 'MemUsed: 7193440 kB' 'SwapCached: 0 kB' 'Active: 999212 kB' 'Inactive: 3705952 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 143784 kB' 'Active(file): 998140 kB' 'Inactive(file): 3562168 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 4571968 kB' 'Mapped: 67936 kB' 'AnonPages: 162408 kB' 'Shmem: 2596 kB' 'KernelStack: 4420 kB' 'PageTables: 3516 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193956 kB' 'Slab: 257688 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63732 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:04:48.276 16:43:37 -- setup/common.sh@31 -- # read -r var val _
00:04:48.276 16:43:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:48.276 16:43:37 -- setup/common.sh@32 -- # continue
[... identical read/compare/continue entries for the remaining node0 meminfo fields, until the requested key is reached ...]
00:04:48.535 16:43:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:48.535 16:43:37 -- setup/common.sh@33 -- # echo 0
00:04:48.535 16:43:37 -- setup/common.sh@33 -- # return 0
00:04:48.535 16:43:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:48.535 16:43:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:48.535 16:43:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:48.535 16:43:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:48.535 node0=1025 expecting 1025
00:04:48.535 16:43:37 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:48.535 16:43:37 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:48.535 
00:04:48.535 real 0m1.012s
00:04:48.535 user 0m0.279s
00:04:48.535 sys 0m0.771s
00:04:48.535 16:43:37 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:48.535 16:43:37 -- common/autotest_common.sh@10 -- # set +x
00:04:48.535 ************************************
00:04:48.535 END TEST odd_alloc
00:04:48.535 ************************************
00:04:48.535 16:43:37 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:48.535 16:43:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:48.535 16:43:37 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:48.535 16:43:37 -- common/autotest_common.sh@10 -- # set +x
00:04:48.535 ************************************
00:04:48.535 START TEST custom_alloc
00:04:48.535 ************************************
00:04:48.535 16:43:37 -- common/autotest_common.sh@1114 -- # custom_alloc
00:04:48.535 16:43:37 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:48.535 16:43:37 -- setup/hugepages.sh@169 -- # local node
00:04:48.535 16:43:37 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:48.535 16:43:37 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:48.535 16:43:37 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:48.535 16:43:37 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:48.535 16:43:37 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:48.535 16:43:37 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:48.535 16:43:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:48.535 16:43:37 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:48.535 16:43:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:48.535 16:43:37 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:48.535 16:43:37 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:48.535 16:43:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:48.535 16:43:37 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:48.535 16:43:37 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:48.535 16:43:37 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:48.535 16:43:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:48.535 16:43:37 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:48.535 16:43:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:48.535 16:43:37 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:48.535 16:43:37 -- setup/hugepages.sh@83 -- # : 0
00:04:48.535 16:43:37 -- setup/hugepages.sh@84 -- # : 0
00:04:48.535 16:43:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
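Editor's note: custom_alloc derives its page count from a requested pool size given in kB: 1048576 kB spread over 2048 kB default huge pages yields the nr_hugepages=512 seen above. A small sketch of that sizing step; the function name mirrors the trace, while the awk helper is an illustrative stand-in:

  #!/usr/bin/env bash
  # Sketch of the sizing step: convert a pool size in kB into a page count
  # using the default huge page size reported by /proc/meminfo.
  get_test_nr_hugepages() {
      local size=$1    # requested pool size in kB (1048576 kB = 1 GiB here)
      local default_hugepages
      default_hugepages=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)
      (( size >= default_hugepages )) || return 1   # same guard as the trace
      echo $(( size / default_hugepages ))
  }

  get_test_nr_hugepages 1048576    # -> 512 on this VM (1048576 / 2048)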
00:04:48.535 16:43:37 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:48.535 16:43:37 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:48.535 16:43:37 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:48.535 16:43:37 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:48.535 16:43:37 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:48.535 16:43:37 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:48.535 16:43:37 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:48.535 16:43:37 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:48.535 16:43:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:48.535 16:43:37 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:48.535 16:43:37 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:48.535 16:43:37 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:48.535 16:43:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:48.535 16:43:37 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:48.535 16:43:37 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:48.535 16:43:37 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:48.535 16:43:37 -- setup/hugepages.sh@78 -- # return 0
00:04:48.535 16:43:37 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:48.535 16:43:37 -- setup/hugepages.sh@187 -- # setup output
00:04:48.535 16:43:37 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:48.535 16:43:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:49.054 16:43:37 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:49.054 16:43:37 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:49.054 16:43:37 -- setup/hugepages.sh@89 -- # local node
00:04:49.054 16:43:37 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:49.054 16:43:37 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:49.054 16:43:37 -- setup/hugepages.sh@92 -- # local surp
00:04:49.054 16:43:37 -- setup/hugepages.sh@93 -- # local resv
00:04:49.054 16:43:37 -- setup/hugepages.sh@94 -- # local anon
00:04:49.054 16:43:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:49.054 16:43:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:49.054 16:43:37 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:49.054 16:43:37 -- setup/common.sh@18 -- # local node=
00:04:49.054 16:43:37 -- setup/common.sh@19 -- # local var val
00:04:49.054 16:43:37 -- setup/common.sh@20 -- # local mem_f mem
00:04:49.054 16:43:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.054 16:43:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.054 16:43:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.054 16:43:37 -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.054 16:43:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.054 16:43:37 -- setup/common.sh@31 -- # IFS=': '
00:04:49.055 16:43:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6103944 kB' 'MemAvailable: 10536904 kB' 'Buffers: 35688 kB' 'Cached: 4536280 kB' 'SwapCached: 0 kB' 'Active: 999220 kB' 'Inactive: 3702068 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 139904 kB' 'Active(file): 998144 kB' 'Inactive(file): 3562164 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 158528 kB' 'Mapped: 67296 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257688 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63732 kB' 'KernelStack: 4304 kB' 'PageTables: 3356 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 498968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19380 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:49.055 16:43:37 -- setup/common.sh@31 -- # read -r var val _
00:04:49.055 16:43:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:49.055 16:43:37 -- setup/common.sh@32 -- # continue
[... identical read/compare/continue entries for the remaining meminfo fields, until the requested key is reached ...]
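Editor's note: verify_nr_hugepages only counts AnonHugePages when transparent hugepages are in play; the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above checks which THP mode is bracketed (active) in sysfs. A sketch of that gate, assuming a Linux sysfs; the helper name and awk stand-in are illustrative:

  #!/usr/bin/env bash
  # Sketch of the THP gate: read the active transparent-hugepage mode, e.g.
  # "always [madvise] never", and only query AnonHugePages when THP is not
  # disabled outright.
  anon_hugepages_kb() {
      local enabled anon=0
      enabled=$(</sys/kernel/mm/transparent_hugepage/enabled)
      if [[ $enabled != *"[never]"* ]]; then
          anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
      fi
      echo "$anon"
  }

  anon_hugepages_kb    # 0 in this run: no THP-backed anonymous memory in use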
00:04:49.055 16:43:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:49.055 16:43:37 -- setup/common.sh@33 -- # echo 0
00:04:49.055 16:43:37 -- setup/common.sh@33 -- # return 0
00:04:49.056 16:43:37 -- setup/hugepages.sh@97 -- # anon=0
00:04:49.056 16:43:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:49.056 16:43:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:49.056 16:43:37 -- setup/common.sh@18 -- # local node=
00:04:49.056 16:43:37 -- setup/common.sh@19 -- # local var val
00:04:49.056 16:43:37 -- setup/common.sh@20 -- # local mem_f mem
00:04:49.056 16:43:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.056 16:43:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.056 16:43:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.056 16:43:37 -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.056 16:43:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.056 16:43:37 -- setup/common.sh@31 -- # IFS=': '
00:04:49.056 16:43:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6103944 kB' 'MemAvailable: 10536904 kB' 'Buffers: 35696 kB' 'Cached: 4536272 kB' 'SwapCached: 0 kB' 'Active: 999220 kB' 'Inactive: 3702260 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 140096 kB' 'Active(file): 998144 kB' 'Inactive(file): 3562164 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 158720 kB' 'Mapped: 67296 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257688 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63732 kB' 'KernelStack: 4272 kB' 'PageTables: 3284 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 498968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:49.056 16:43:37 -- setup/common.sh@31 -- # read -r var val _
00:04:49.056 16:43:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.056 16:43:37 -- setup/common.sh@32 -- # continue
[... identical read/compare/continue entries for the intervening meminfo fields ...]
00:04:49.056 16:43:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.057 16:43:37 -- setup/common.sh@33 -- # echo 0 00:04:49.057 16:43:37 -- setup/common.sh@33 -- # return 0 00:04:49.057 16:43:37 -- setup/hugepages.sh@99 -- # surp=0 00:04:49.057 16:43:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:49.057 16:43:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:49.057 16:43:37 -- setup/common.sh@18 -- # local node= 00:04:49.057 16:43:37 -- setup/common.sh@19 -- # local var val 00:04:49.057 16:43:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.057 16:43:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.057 16:43:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.057 16:43:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.057 16:43:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.057 16:43:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6104208 kB' 'MemAvailable: 10537168 kB' 'Buffers: 35696 kB' 'Cached: 4536272 kB' 'SwapCached: 0 kB' 'Active: 999212 kB' 'Inactive: 3702104 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 139940 kB' 'Active(file): 998144 kB' 'Inactive(file): 3562164 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 158556 kB' 'Mapped: 67264 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257688 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63732 kB' 'KernelStack: 4288 kB' 'PageTables: 3312 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 498968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 
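The pass above is setup/common.sh's get_meminfo scanning every /proc/meminfo field until it hits HugePages_Surp; hugepages.sh@99 then records the answer as surp=0. Condensed into a sketch — control flow reconstructed from the @-numbered trace lines, helper body paraphrased rather than the literal source:

    get_meminfo() {
        local get=$1 node=$2            # e.g. get=HugePages_Surp; node empty for the global pass
        local var val _ mem_f mem
        mem_f=/proc/meminfo                                    # @22: default source
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # @24: per-node source
        fi
        mapfile -t mem < "$mem_f"                              # @28: snapshot the file
        mem=("${mem[@]#Node +([0-9]) }")                       # @29: drop any "Node N " prefix (extglob)
        while IFS=': ' read -r var val _; do                   # @31: split "Field:   value kB"
            [[ $var == "$get" ]] || continue                   # @32: the long skip chain in the trace
            echo "$val"                                        # @33: value only, e.g. 0
            return 0
        done < <(printf '%s\n' "${mem[@]}")                    # @16: replay the snapshot line by line
    }

Each "continue" entry in the trace is one non-matching meminfo field falling through this loop.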
'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.057 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.057 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 
-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.058 16:43:37 -- setup/common.sh@33 -- # echo 0 00:04:49.058 16:43:37 -- setup/common.sh@33 -- # return 0 00:04:49.058 16:43:37 -- setup/hugepages.sh@100 -- # resv=0 00:04:49.058 nr_hugepages=512 00:04:49.058 16:43:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:49.058 resv_hugepages=0 00:04:49.058 16:43:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:49.058 surplus_hugepages=0 00:04:49.058 16:43:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:49.058 anon_hugepages=0 00:04:49.058 16:43:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:49.058 16:43:37 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:49.058 16:43:37 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:49.058 16:43:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:49.058 16:43:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:49.058 16:43:37 -- setup/common.sh@18 -- # local node= 00:04:49.058 16:43:37 -- setup/common.sh@19 -- # local var val 00:04:49.058 16:43:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.058 16:43:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.058 16:43:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.058 16:43:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.058 16:43:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.058 16:43:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- 
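At this point both counters are in hand (surp=0 at hugepages.sh@99, resv=0 at @100) and the script prints them before testing the invariant at @107: the pool the test configured must equal what the kernel reports, surplus and reserved included. Paraphrased from the trace, with the literal values this run substitutes shown in comments:

    surp=$(get_meminfo HugePages_Surp)    # @99  -> 0
    resv=$(get_meminfo HugePages_Rsvd)    # @100 -> 0
    echo "nr_hugepages=$nr_hugepages"     # @102 -> 512 for custom_alloc
    echo "resv_hugepages=$resv"           # @103
    echo "surplus_hugepages=$surp"        # @104
    echo "anon_hugepages=$anon"           # @105
    (( 512 == nr_hugepages + surp + resv ))   # @107: target vs. kernel-reported sum

The check at @110 then repeats the same identity against a freshly read HugePages_Total, which is the pass the trace is entering here.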
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6104988 kB' 'MemAvailable: 10537948 kB' 'Buffers: 35696 kB' 'Cached: 4536272 kB' 'SwapCached: 0 kB' 'Active: 999212 kB' 'Inactive: 3702104 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 139940 kB' 'Active(file): 998144 kB' 'Inactive(file): 3562164 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 158556 kB' 'Mapped: 67264 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257688 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63732 kB' 'KernelStack: 4356 kB' 'PageTables: 3312 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 498968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.058 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.058 16:43:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.058 16:43:37 
-- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 
00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # 
continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.059 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.059 16:43:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.059 16:43:37 -- setup/common.sh@33 -- # echo 512 00:04:49.060 16:43:37 -- setup/common.sh@33 -- # return 0 00:04:49.060 16:43:37 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:49.060 16:43:37 -- setup/hugepages.sh@112 -- # get_nodes 00:04:49.060 16:43:37 -- setup/hugepages.sh@27 -- # local node 00:04:49.060 16:43:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.060 16:43:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:49.060 16:43:37 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:49.060 16:43:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:49.060 16:43:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:49.060 16:43:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:49.060 16:43:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:49.060 16:43:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.060 16:43:37 -- setup/common.sh@18 -- # 
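The @110 recount passes (echo 512 at @33), so verification moves to the per-node leg: get_nodes at @112 walks /sys/devices/system/node/node+([0-9]) — a single node0 on this VM — and the loop at @115-@117 re-reads the counters from that node's own meminfo; note get_meminfo switching its source file to node0/meminfo just below. A minimal sketch of the traced flow, with the array fill shown as the trace prints it (values already substituted):

    shopt -s extglob                           # the node+([0-9]) glob needs it
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512          # @30: node0 holds all 512 pages in this run
    done
    no_nodes=1                                 # @32: one NUMA node on this VM
    for node in "${!nodes_test[@]}"; do        # @115
        (( nodes_test[node] += resv ))         # @116: fold reserved pages back into the target
        get_meminfo HugePages_Surp "$node"     # @117: per-node read via node0/meminfo
    done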
local node=0 00:04:49.060 16:43:37 -- setup/common.sh@19 -- # local var val 00:04:49.060 16:43:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.060 16:43:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.060 16:43:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:49.060 16:43:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:49.060 16:43:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.060 16:43:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.320 16:43:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6104988 kB' 'MemUsed: 6137992 kB' 'SwapCached: 0 kB' 'Active: 999220 kB' 'Inactive: 3701960 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 139796 kB' 'Active(file): 998152 kB' 'Inactive(file): 3562164 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 4571976 kB' 'Mapped: 67264 kB' 'AnonPages: 158400 kB' 'Shmem: 2596 kB' 'KernelStack: 4408 kB' 'PageTables: 3536 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193956 kB' 'Slab: 257688 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63732 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.320 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.320 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.321 16:43:37 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.321 16:43:37 -- setup/common.sh@32 
-- # continue 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # continue 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.321 16:43:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.321 16:43:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.321 16:43:37 -- setup/common.sh@33 -- # echo 0 00:04:49.321 16:43:37 -- setup/common.sh@33 -- # return 0 00:04:49.321 16:43:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.321 16:43:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.321 16:43:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.321 16:43:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.321 node0=512 expecting 512 00:04:49.321 16:43:37 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:49.321 16:43:37 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:49.321 00:04:49.321 real 0m0.746s 00:04:49.321 user 0m0.289s 00:04:49.321 sys 0m0.494s 00:04:49.321 16:43:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.321 16:43:37 -- common/autotest_common.sh@10 -- # set +x 00:04:49.321 ************************************ 00:04:49.321 END TEST custom_alloc 00:04:49.321 ************************************ 00:04:49.321 16:43:37 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:49.321 16:43:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.321 16:43:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.321 16:43:37 -- common/autotest_common.sh@10 -- # set +x 00:04:49.321 ************************************ 00:04:49.321 START TEST no_shrink_alloc 00:04:49.321 ************************************ 00:04:49.321 16:43:38 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:04:49.321 16:43:38 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:49.321 16:43:38 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:49.321 16:43:38 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:49.321 16:43:38 -- setup/hugepages.sh@51 -- # shift 00:04:49.321 16:43:38 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:49.321 16:43:38 -- setup/hugepages.sh@52 -- # local node_ids 00:04:49.321 16:43:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:49.321 16:43:38 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:49.321 16:43:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:49.321 16:43:38 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:49.321 16:43:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:49.321 16:43:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:49.321 16:43:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:49.321 16:43:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:49.321 16:43:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:49.321 16:43:38 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:49.321 16:43:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:49.321 16:43:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:49.321 16:43:38 -- setup/hugepages.sh@73 -- # return 0 00:04:49.321 16:43:38 -- setup/hugepages.sh@198 -- # setup output 00:04:49.321 16:43:38 -- setup/common.sh@9 -- # [[ output 
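custom_alloc closes out green here — node0=512 expecting 512, the [[ 512 == 512 ]] match at @130, 0.746s wall time — and no_shrink_alloc immediately re-runs the sizing helper with get_test_nr_hugepages 2097152 0. Reconstructed from hugepages.sh@49-@71; only the resulting literals appear in the trace, and the division is implied by the 2048 kB Hugepagesize reported in the dumps above:

    size=2097152                                   # @49: request in kB (2 GiB)
    node_ids=('0')                                 # @52: pin to node 0
    (( size >= default_hugepages ))                # @55: 2097152 >= 2048
    nr_hugepages=$(( size / default_hugepages ))   # @57: 2097152 / 2048 = 1024 pages (assumed form;
                                                   #      the trace shows only the literal 1024)
    nodes_test[0]=1024                             # @71: whole target on node 0

So where custom_alloc expected 512 pages, this test expects 1024 — matching the HugePages_Total: 1024 and Hugetlb: 2097152 kB lines in the meminfo dump that follows.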
== output ]] 00:04:49.321 16:43:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:49.580 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:49.580 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:50.150 16:43:38 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:50.150 16:43:38 -- setup/hugepages.sh@89 -- # local node 00:04:50.150 16:43:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:50.150 16:43:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:50.150 16:43:38 -- setup/hugepages.sh@92 -- # local surp 00:04:50.150 16:43:38 -- setup/hugepages.sh@93 -- # local resv 00:04:50.150 16:43:38 -- setup/hugepages.sh@94 -- # local anon 00:04:50.150 16:43:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:50.150 16:43:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:50.150 16:43:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:50.150 16:43:38 -- setup/common.sh@18 -- # local node= 00:04:50.150 16:43:38 -- setup/common.sh@19 -- # local var val 00:04:50.150 16:43:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:50.150 16:43:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.150 16:43:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.150 16:43:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.150 16:43:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.150 16:43:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.150 16:43:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5056008 kB' 'MemAvailable: 9488976 kB' 'Buffers: 35696 kB' 'Cached: 4536280 kB' 'SwapCached: 0 kB' 'Active: 999240 kB' 'Inactive: 3702128 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 139976 kB' 'Active(file): 998164 kB' 'Inactive(file): 3562152 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 158700 kB' 'Mapped: 67324 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257880 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63924 kB' 'KernelStack: 4320 kB' 'PageTables: 3392 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19380 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
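Before sampling AnonHugePages, verify_nr_hugepages gates on transparent hugepages at @96: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above is matching the THP policy string, where the bracketed token marks the active mode. Paraphrased — the sysfs path is the standard kernel one, an assumption here since the trace shows only the expanded string:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then                    # THP not disabled outright
        anon=$(get_meminfo AnonHugePages)                 # @97; comes back 0 kB in this run
    fi

With madvise active but nothing calling madvise(), the AnonHugePages pass that follows returns 0.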
00:04:50.150 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.150 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.150 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- 
setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.151 16:43:38 -- setup/common.sh@33 -- # echo 0 00:04:50.151 16:43:38 -- setup/common.sh@33 -- # return 0 00:04:50.151 16:43:38 -- setup/hugepages.sh@97 -- # anon=0 00:04:50.151 16:43:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:50.151 16:43:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.151 16:43:38 -- setup/common.sh@18 -- # local node= 00:04:50.151 16:43:38 -- setup/common.sh@19 -- # local var val 00:04:50.151 16:43:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:50.151 16:43:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.151 16:43:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.151 16:43:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.151 16:43:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.151 16:43:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
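The scan traced above is all get_meminfo does: snapshot the relevant meminfo file, then walk it line by line until the requested key matches and echo its value. Below is a minimal sketch of that helper, reconstructed from the xtrace alone rather than taken from the authoritative setup/common.sh; the extglob prefix strip and the per-node file fallback follow the trace, while the loop wiring is an assumption.

    #!/usr/bin/env bash
    shopt -s extglob # the +([0-9]) pattern below needs extglob

    # Reconstructed sketch of the get_meminfo helper seen in the xtrace.
    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # With a node argument, read that node's own meminfo instead
        # (common.sh@23/@24 in the trace; empty node falls through).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every row with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue # not the requested key yet
            echo "$val"                      # e.g. 0 for AnonHugePages here
            return 0
        done
        return 1
    }

Called as get_meminfo AnonHugePages against the dump above, this prints 0, which is exactly the echo 0 / return 0 pair the trace shows.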
00:04:50.151 16:43:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:50.151 16:43:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:50.151 16:43:38 -- setup/common.sh@18 -- # local node=
00:04:50.151 16:43:38 -- setup/common.sh@19 -- # local var val
00:04:50.151 16:43:38 -- setup/common.sh@20 -- # local mem_f mem
00:04:50.151 16:43:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:50.151 16:43:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:50.151 16:43:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:50.151 16:43:38 -- setup/common.sh@28 -- # mapfile -t mem
00:04:50.151 16:43:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:50.151 16:43:38 -- setup/common.sh@31 -- # IFS=': '
00:04:50.151 16:43:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5056008 kB' 'MemAvailable: 9488976 kB' 'Buffers: 35696 kB' 'Cached: 4536280 kB' 'SwapCached: 0 kB' 'Active: 999240 kB' 'Inactive: 3702128 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 139976 kB' 'Active(file): 998164 kB' 'Inactive(file): 3562152 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 158700 kB' 'Mapped: 67324 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257880 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63924 kB' 'KernelStack: 4320 kB' 'PageTables: 3392 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:50.151 16:43:38 -- setup/common.sh@31 -- # read -r var val _
00:04:50.151 16:43:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:50.151 16:43:38 -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / [[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle repeats for every remaining meminfo field ...]
00:04:50.152 16:43:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:50.152 16:43:38 -- setup/common.sh@33 -- # echo 0
00:04:50.152 16:43:38 -- setup/common.sh@33 -- # return 0
00:04:50.152 16:43:38 -- setup/hugepages.sh@99 -- # surp=0
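With a helper like that in scope, the four lookups this verify pass performs, and the values this particular run got back, reduce to the following (surp, resv and anon are the locals named in the trace; total is an illustrative name, since the script reads HugePages_Total inline):

    anon=$(get_meminfo AnonHugePages)    # 0 kB of transparent hugepages in use
    surp=$(get_meminfo HugePages_Surp)   # 0 surplus pages
    resv=$(get_meminfo HugePages_Rsvd)   # 0 reserved pages
    total=$(get_meminfo HugePages_Total) # 1024 pages of Hugepagesize 2048 kB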
00:04:50.152 16:43:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:50.152 16:43:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:50.152 16:43:38 -- setup/common.sh@18 -- # local node=
00:04:50.152 16:43:38 -- setup/common.sh@19 -- # local var val
00:04:50.152 16:43:38 -- setup/common.sh@20 -- # local mem_f mem
00:04:50.152 16:43:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:50.152 16:43:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:50.152 16:43:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:50.152 16:43:38 -- setup/common.sh@28 -- # mapfile -t mem
00:04:50.152 16:43:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:50.152 16:43:38 -- setup/common.sh@31 -- # IFS=': '
00:04:50.152 16:43:38 -- setup/common.sh@31 -- # read -r var val _
00:04:50.153 16:43:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5055756 kB' 'MemAvailable: 9488724 kB' 'Buffers: 35696 kB' 'Cached: 4536280 kB' 'SwapCached: 0 kB' 'Active: 999232 kB' 'Inactive: 3701996 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 139844 kB' 'Active(file): 998164 kB' 'Inactive(file): 3562152 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 158508 kB' 'Mapped: 67264 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257896 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63940 kB' 'KernelStack: 4320 kB' 'PageTables: 3380 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:50.153 16:43:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:50.153 16:43:38 -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / [[ field == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue cycle repeats for every remaining meminfo field until HugePages_Rsvd is reached ...]
00:04:50.154 16:43:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:50.154 16:43:38 -- setup/common.sh@33 -- # echo 0
00:04:50.154 16:43:38 -- setup/common.sh@33 -- # return 0
00:04:50.154 16:43:38 -- setup/hugepages.sh@100 -- # resv=0
00:04:50.154 nr_hugepages=1024
00:04:50.154 16:43:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:50.154 resv_hugepages=0
00:04:50.154 16:43:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:50.154 surplus_hugepages=0
00:04:50.154 16:43:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:50.154 anon_hugepages=0
00:04:50.154 16:43:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:50.154 16:43:38 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:50.154 16:43:38 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:50.154 16:43:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:50.154 16:43:38 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:50.154 16:43:38 -- setup/common.sh@18 -- # local node=
00:04:50.154 16:43:38 -- setup/common.sh@19 -- # local var val
00:04:50.154 16:43:38 -- setup/common.sh@20 -- # local mem_f mem
00:04:50.154 16:43:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:50.154 16:43:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:50.154 16:43:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:50.154 16:43:38 -- setup/common.sh@28 -- # mapfile -t mem
00:04:50.154 16:43:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:50.154 16:43:38 -- setup/common.sh@31 -- # IFS=': '
00:04:50.154 16:43:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5055756 kB' 'MemAvailable: 9488724 kB' 'Buffers: 35696 kB' 'Cached: 4536280 kB' 'SwapCached: 0 kB' 'Active: 999232 kB' 'Inactive: 3701972 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 139820 kB' 'Active(file): 998164 kB' 'Inactive(file): 3562152 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 158480 kB' 'Mapped: 67264 kB' 'Shmem: 2596 kB' 'KReclaimable: 193956 kB' 'Slab: 257896 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63940 kB' 'KernelStack: 4288 kB' 'PageTables: 3308 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:50.154 16:43:38 -- setup/common.sh@31 -- # read -r var val _
00:04:50.154 16:43:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:50.154 16:43:38 -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / [[ field == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue cycle repeats for every remaining meminfo field ...]
00:04:50.155 16:43:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:50.155 16:43:38 -- setup/common.sh@33 -- # echo 1024
00:04:50.155 16:43:38 -- setup/common.sh@33 -- # return 0
00:04:50.155 16:43:38 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:50.155 16:43:38 -- setup/hugepages.sh@112 -- # get_nodes
00:04:50.155 16:43:38 -- setup/hugepages.sh@27 -- # local node
00:04:50.155 16:43:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:50.155 16:43:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:50.155 16:43:38 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:50.155 16:43:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
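Before the per-node check, get_nodes (hugepages.sh@27-@33 above) enumerates NUMA nodes with an extglob sweep of sysfs. A sketch under the same single-node conditions; the 1024 seed is the value visible in the trace, and how the real script derives it per node is not shown in this log:

    shopt -s extglob # the +([0-9]) glob needs extglob
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips the path through the last "node",
        # leaving just the numeric index (0 here).
        nodes_sys[${node##*node}]=1024
    done
    no_nodes=${#nodes_sys[@]} # 1 on this single-node VM
    (( no_nodes > 0 ))        # fails the pass if sysfs exposes no nodes at all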
00:04:50.155 16:43:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.155 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.155 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.155 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.155 16:43:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.155 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.155 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.155 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.155 16:43:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.155 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.155 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.155 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.155 16:43:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.155 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.155 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.155 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.155 16:43:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.155 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.155 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.155 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.155 16:43:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.155 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.155 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.155 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.155 16:43:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.155 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.155 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.155 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.155 16:43:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.156 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.156 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.156 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.156 16:43:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.156 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.156 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.156 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.156 16:43:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.156 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.156 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.156 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.156 16:43:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.156 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.156 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.156 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.156 16:43:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.156 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.156 16:43:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.156 16:43:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.156 16:43:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.156 16:43:38 -- setup/common.sh@32 -- # continue 00:04:50.156 16:43:38 -- 
[... repeated setup/common.sh@31-@32 xtrace elided: each remaining node0 meminfo field (Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, HugePages_Total, HugePages_Free) is compared against HugePages_Surp and skipped with continue ...]
00:04:50.156 16:43:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:50.156 16:43:38 -- setup/common.sh@33 -- # echo 0
00:04:50.156 16:43:38 -- setup/common.sh@33 -- # return 0
00:04:50.156 16:43:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:50.156 16:43:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:50.156 16:43:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:50.156 16:43:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
node0=1024 expecting 1024
00:04:50.156 16:43:39 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:50.156 16:43:39 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:50.156 16:43:39 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:50.156 16:43:39 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:50.156 16:43:39 -- setup/hugepages.sh@202 -- # setup output
00:04:50.156 16:43:39 -- setup/common.sh@9 -- # [[ output == output ]]
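The hugepages.sh@202 lines above prepare a fresh hugepage request before re-running SPDK's setup script: CLEAR_HUGE=no keeps the pool that is already allocated, and NRHUGE=512 asks for 512 pages. The `setup output` helper (dispatching on its "output" argument at common.sh@9) then invokes scripts/setup.sh. A minimal manual reproduction of that call, assuming setup.sh reads NRHUGE and CLEAR_HUGE from the environment as the trace suggests and using this job's checkout path, would be:

    # Hedged sketch: re-run setup.sh the way hugepages.sh@202 does.
    # CLEAR_HUGE=no leaves the 1024 pages already allocated in place;
    # NRHUGE=512 is the request setup.sh reports as already satisfied below.
    CLEAR_HUGE=no NRHUGE=512 /home/vagrant/spdk_repo/spdk/scripts/setup.sh

As the INFO line in the output below shows, setup.sh notices that node0 already holds 1024 pages, more than the 512 requested, so the existing pool is left untouched.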
00:04:50.156 16:43:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:50.415 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:50.678 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:50.678 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:50.678 16:43:39 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:50.678 16:43:39 -- setup/hugepages.sh@89 -- # local node
00:04:50.678 16:43:39 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:50.678 16:43:39 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:50.678 16:43:39 -- setup/hugepages.sh@92 -- # local surp
00:04:50.678 16:43:39 -- setup/hugepages.sh@93 -- # local resv
00:04:50.678 16:43:39 -- setup/hugepages.sh@94 -- # local anon
00:04:50.678 16:43:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:50.678 16:43:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:50.678 16:43:39 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:50.678 16:43:39 -- setup/common.sh@18 -- # local node=
00:04:50.678 16:43:39 -- setup/common.sh@19 -- # local var val
00:04:50.678 16:43:39 -- setup/common.sh@20 -- # local mem_f mem
00:04:50.678 16:43:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:50.678 16:43:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:50.678 16:43:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:50.678 16:43:39 -- setup/common.sh@28 -- # mapfile -t mem
00:04:50.678 16:43:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:50.678 16:43:39 -- setup/common.sh@31 -- # IFS=': '
00:04:50.678 16:43:39 -- setup/common.sh@31 -- # read -r var val _
00:04:50.678 16:43:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5052052 kB' 'MemAvailable: 9485020 kB' 'Buffers: 35696 kB' 'Cached: 4536272 kB' 'SwapCached: 0 kB' 'Active: 999244 kB' 'Inactive: 3702264 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 140112 kB' 'Active(file): 998164 kB' 'Inactive(file): 3562152 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 408 kB' 'Writeback: 0 kB' 'AnonPages: 158700 kB' 'Mapped: 67228 kB' 'Shmem: 2588 kB' 'KReclaimable: 193956 kB' 'Slab: 257944 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63988 kB' 'KernelStack: 4428 kB' 'PageTables: 3640 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19372 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
[... repeated setup/common.sh@31-@32 xtrace elided: every /proc/meminfo field from MemTotal through HardwareCorrupted is compared against AnonHugePages and skipped with continue ...]
00:04:50.679 16:43:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:50.679 16:43:39 -- setup/common.sh@33 -- # echo 0
00:04:50.679 16:43:39 -- setup/common.sh@33 -- # return 0
00:04:50.679 16:43:39 -- setup/hugepages.sh@97 -- # anon=0
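For readability, here is get_meminfo as it can be reconstructed from the setup/common.sh xtrace above (the @17-@33 line references); this is a sketch inferred from the trace, not a verbatim copy of the SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used on @29

    # get_meminfo FIELD [NODE] -- print one value from /proc/meminfo, or from
    # /sys/devices/system/node/nodeN/meminfo when a node is given (@23-@24).
    get_meminfo() {
        local get=$1
        local node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it (@29).
        mem=("${mem[@]#Node +([0-9]) }")

        # The long [[ field == \H\u\g\e... ]] / continue runs in this log are
        # this loop testing each line until the requested field matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"   # a kB figure, or a bare page count for HugePages_*
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

Called as get_meminfo AnonHugePages it scans the global file (node is empty, so the node-path test on @23 fails), while get_meminfo HugePages_Surp 0 later in this log switches to node0's own meminfo file.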
00:04:50.679 16:43:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:50.679 16:43:39 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:50.679 16:43:39 -- setup/common.sh@18 -- # local node=
[... setup/common.sh@19-@31 prologue identical to the AnonHugePages lookup above ...]
00:04:50.679 16:43:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5053084 kB' 'MemAvailable: 9486052 kB' 'Buffers: 35696 kB' 'Cached: 4536272 kB' 'SwapCached: 0 kB' 'Active: 999244 kB' 'Inactive: 3702360 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 140208 kB' 'Active(file): 998164 kB' 'Inactive(file): 3562152 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 408 kB' 'Writeback: 0 kB' 'AnonPages: 158784 kB' 'Mapped: 67268 kB' 'Shmem: 2588 kB' 'KReclaimable: 193956 kB' 'Slab: 257944 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63988 kB' 'KernelStack: 4280 kB' 'PageTables: 3428 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19372 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
[... repeated setup/common.sh@31-@32 xtrace elided: every field from MemTotal through HugePages_Rsvd is compared against HugePages_Surp and skipped with continue ...]
00:04:50.680 16:43:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:50.680 16:43:39 -- setup/common.sh@33 -- # echo 0
00:04:50.680 16:43:39 -- setup/common.sh@33 -- # return 0
00:04:50.680 16:43:39 -- setup/hugepages.sh@99 -- # surp=0
00:04:50.680 16:43:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:50.680 16:43:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[... setup/common.sh@18-@31 prologue identical to the lookups above ...]
00:04:50.680 16:43:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5053092 kB' 'MemAvailable: 9486060 kB' 'Buffers: 35696 kB' 'Cached: 4536272 kB' 'SwapCached: 0 kB' 'Active: 999232 kB' 'Inactive: 3702080 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 139928 kB' 'Active(file): 998164 kB' 'Inactive(file): 3562152 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 408 kB' 'Writeback: 0 kB' 'AnonPages: 158488 kB' 'Mapped: 67264 kB' 'Shmem: 2588 kB' 'KReclaimable: 193956 kB' 'Slab: 258040 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 64084 kB' 'KernelStack: 4304 kB' 'PageTables: 3584 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19372 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
[... repeated setup/common.sh@31-@32 xtrace elided: every field from MemTotal through HugePages_Free is compared against HugePages_Rsvd and skipped with continue ...]
00:04:50.681 16:43:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:50.681 16:43:39 -- setup/common.sh@33 -- # echo 0
00:04:50.681 16:43:39 -- setup/common.sh@33 -- # return 0
00:04:50.682 16:43:39 -- setup/hugepages.sh@100 -- # resv=0
nr_hugepages=1024
00:04:50.682 16:43:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
resv_hugepages=0
00:04:50.682 16:43:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
00:04:50.682 16:43:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:04:50.682 16:43:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:50.682 16:43:39 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:50.682 16:43:39 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
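The @97-@110 bookkeeping traced around this point condenses to the following sketch (variable names are taken from the xtrace; the real hugepages.sh source may differ, and it relies on the get_meminfo helper sketched earlier in this log):

    # Hedged sketch of verify_nr_hugepages' accounting (setup/hugepages.sh@97-@110).
    nr_hugepages=1024                    # the pool size this test expects

    anon=$(get_meminfo AnonHugePages)    # 0 here; only checked because THP is "[madvise]", not "[never]" (@96)
    surp=$(get_meminfo HugePages_Surp)   # 0: no surplus pages beyond the static pool
    resv=$(get_meminfo HugePages_Rsvd)   # 0: no pages reserved but not yet faulted in

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # The pool is consistent when the kernel's HugePages_Total equals the
    # requested count plus surplus and reserved pages: 1024 == 1024 + 0 + 0 here.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))

The HugePages_Total lookup that feeds the final check is the trace that follows, after which get_nodes repeats the same surplus check per NUMA node.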
00:04:50.682 16:43:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:50.682 16:43:39 -- setup/common.sh@17 -- # local get=HugePages_Total
[... setup/common.sh@18-@31 prologue identical to the lookups above ...]
00:04:50.682 16:43:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5055420 kB' 'MemAvailable: 9488388 kB' 'Buffers: 35696 kB' 'Cached: 4536272 kB' 'SwapCached: 0 kB' 'Active: 999232 kB' 'Inactive: 3702040 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 139888 kB' 'Active(file): 998164 kB' 'Inactive(file): 3562152 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 408 kB' 'Writeback: 0 kB' 'AnonPages: 158672 kB' 'Mapped: 67264 kB' 'Shmem: 2588 kB' 'KReclaimable: 193956 kB' 'Slab: 257904 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63948 kB' 'KernelStack: 4348 kB' 'PageTables: 3712 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 503452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19356 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
[... repeated setup/common.sh@31-@32 xtrace elided: every field from MemTotal through FilePmdMapped is compared against HugePages_Total and skipped with continue ...]
00:04:50.683 16:43:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:50.683 16:43:39 -- setup/common.sh@33 -- # echo 1024
00:04:50.683 16:43:39 -- setup/common.sh@33 -- # return 0
00:04:50.683 16:43:39 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:50.683 16:43:39 -- setup/hugepages.sh@112 -- # get_nodes
00:04:50.683 16:43:39 -- setup/hugepages.sh@27 -- # local node
00:04:50.683 16:43:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:50.683 16:43:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:50.683 16:43:39 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:50.683 16:43:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:50.683 16:43:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:50.683 16:43:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:50.683 16:43:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:50.683 16:43:39 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:50.683 16:43:39 -- setup/common.sh@18 -- # local node=0
00:04:50.683 16:43:39 -- setup/common.sh@19 -- # local var val
00:04:50.683 16:43:39 -- setup/common.sh@20 -- # local mem_f mem
00:04:50.683 16:43:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:50.683 16:43:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:50.683 16:43:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:50.683 16:43:39 -- setup/common.sh@28 -- # mapfile -t mem
00:04:50.683 16:43:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:50.683 16:43:39 -- setup/common.sh@31 -- # IFS=': '
00:04:50.683 16:43:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5055168 kB' 'MemUsed: 7187812 kB' 'SwapCached: 0 kB' 'Active: 999232 kB' 'Inactive: 3701980 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 139828 kB' 'Active(file): 998164 kB' 'Inactive(file): 3562152 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 408 kB' 'Writeback: 0 kB' 'FilePages: 4571968 kB' 'Mapped: 67264 kB' 'AnonPages: 158348 kB' 'Shmem: 2588 kB' 'KernelStack: 4332 kB' 'PageTables: 3416 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193956 kB' 'Slab: 257904 kB' 'SReclaimable: 193956 kB' 'SUnreclaim: 63948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:50.683 16:43:39 -- setup/common.sh@31
-- # read -r var val _ 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.683 
16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.683 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.683 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # continue 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.684 16:43:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.684 16:43:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.684 16:43:39 -- setup/common.sh@33 -- # echo 0 00:04:50.684 16:43:39 -- setup/common.sh@33 -- # return 0 00:04:50.684 16:43:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:50.684 16:43:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:50.684 16:43:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:50.684 16:43:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:50.684 node0=1024 expecting 1024 00:04:50.684 16:43:39 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:50.684 16:43:39 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:50.684 00:04:50.684 real 0m1.472s 00:04:50.684 user 0m0.642s 00:04:50.684 sys 0m0.908s 00:04:50.684 16:43:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.684 16:43:39 -- common/autotest_common.sh@10 -- # set +x 00:04:50.684 ************************************ 00:04:50.684 END TEST 
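The field-by-field scan condensed above is the get_meminfo pattern from setup/common.sh: slurp the per-node meminfo file, strip its "Node N" prefixes, then split each "Key: value" line on IFS=': ' and print the value once the requested key matches. A minimal self-contained sketch of the same technique (illustrative only; the real helper's plumbing differs):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val _ line mem_f=/proc/meminfo
        # prefer the per-node sysfs view when a node is given and it exists
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N" prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Surp 0   # prints 0 on this VM, as echoed above

00:04:50.684 ************************************
00:04:50.684 END TEST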
no_shrink_alloc 00:04:50.684 ************************************ 00:04:50.684 16:43:39 -- setup/hugepages.sh@217 -- # clear_hp 00:04:50.684 16:43:39 -- setup/hugepages.sh@37 -- # local node hp 00:04:50.684 16:43:39 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:50.684 16:43:39 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.684 16:43:39 -- setup/hugepages.sh@41 -- # echo 0 00:04:50.684 16:43:39 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.684 16:43:39 -- setup/hugepages.sh@41 -- # echo 0 00:04:50.684 16:43:39 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:50.684 16:43:39 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:50.684 00:04:50.684 real 0m6.643s 00:04:50.684 user 0m2.425s 00:04:50.684 sys 0m4.435s 00:04:50.684 16:43:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.684 ************************************ 00:04:50.684 END TEST hugepages 00:04:50.684 16:43:39 -- common/autotest_common.sh@10 -- # set +x 00:04:50.684 ************************************ 00:04:50.944 16:43:39 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:50.944 16:43:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.944 16:43:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.944 16:43:39 -- common/autotest_common.sh@10 -- # set +x 00:04:50.944 ************************************ 00:04:50.944 START TEST driver 00:04:50.944 ************************************ 00:04:50.944 16:43:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:50.944 * Looking for test storage... 00:04:50.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:50.944 16:43:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:50.944 16:43:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:50.944 16:43:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:50.945 16:43:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:50.945 16:43:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:50.945 16:43:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:50.945 16:43:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:50.945 16:43:39 -- scripts/common.sh@335 -- # IFS=.-: 00:04:50.945 16:43:39 -- scripts/common.sh@335 -- # read -ra ver1 00:04:50.945 16:43:39 -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.945 16:43:39 -- scripts/common.sh@336 -- # read -ra ver2 00:04:50.945 16:43:39 -- scripts/common.sh@337 -- # local 'op=<' 00:04:50.945 16:43:39 -- scripts/common.sh@339 -- # ver1_l=2 00:04:50.945 16:43:39 -- scripts/common.sh@340 -- # ver2_l=1 00:04:50.945 16:43:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:50.945 16:43:39 -- scripts/common.sh@343 -- # case "$op" in 00:04:50.945 16:43:39 -- scripts/common.sh@344 -- # : 1 00:04:50.945 16:43:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:50.945 16:43:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.945 16:43:39 -- scripts/common.sh@364 -- # decimal 1 00:04:50.945 16:43:39 -- scripts/common.sh@352 -- # local d=1 00:04:50.945 16:43:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.945 16:43:39 -- scripts/common.sh@354 -- # echo 1 00:04:50.945 16:43:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:50.945 16:43:39 -- scripts/common.sh@365 -- # decimal 2 00:04:50.945 16:43:39 -- scripts/common.sh@352 -- # local d=2 00:04:50.945 16:43:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.945 16:43:39 -- scripts/common.sh@354 -- # echo 2 00:04:50.945 16:43:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:50.945 16:43:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:50.945 16:43:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:50.945 16:43:39 -- scripts/common.sh@367 -- # return 0 00:04:50.945 16:43:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.945 16:43:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:50.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.945 --rc genhtml_branch_coverage=1 00:04:50.945 --rc genhtml_function_coverage=1 00:04:50.945 --rc genhtml_legend=1 00:04:50.945 --rc geninfo_all_blocks=1 00:04:50.945 --rc geninfo_unexecuted_blocks=1 00:04:50.945 00:04:50.945 ' 00:04:50.945 16:43:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:50.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.945 --rc genhtml_branch_coverage=1 00:04:50.945 --rc genhtml_function_coverage=1 00:04:50.945 --rc genhtml_legend=1 00:04:50.945 --rc geninfo_all_blocks=1 00:04:50.945 --rc geninfo_unexecuted_blocks=1 00:04:50.945 00:04:50.945 ' 00:04:50.945 16:43:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:50.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.945 --rc genhtml_branch_coverage=1 00:04:50.945 --rc genhtml_function_coverage=1 00:04:50.945 --rc genhtml_legend=1 00:04:50.945 --rc geninfo_all_blocks=1 00:04:50.945 --rc geninfo_unexecuted_blocks=1 00:04:50.945 00:04:50.945 ' 00:04:50.945 16:43:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:50.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.945 --rc genhtml_branch_coverage=1 00:04:50.945 --rc genhtml_function_coverage=1 00:04:50.945 --rc genhtml_legend=1 00:04:50.945 --rc geninfo_all_blocks=1 00:04:50.945 --rc geninfo_unexecuted_blocks=1 00:04:50.945 00:04:50.945 ' 00:04:50.945 16:43:39 -- setup/driver.sh@68 -- # setup reset 00:04:50.945 16:43:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.945 16:43:39 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:51.513 16:43:40 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:51.513 16:43:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.513 16:43:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.513 16:43:40 -- common/autotest_common.sh@10 -- # set +x 00:04:51.513 ************************************ 00:04:51.513 START TEST guess_driver 00:04:51.513 ************************************ 00:04:51.513 16:43:40 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:51.513 16:43:40 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:51.513 16:43:40 -- setup/driver.sh@47 -- # local fail=0 00:04:51.513 16:43:40 -- setup/driver.sh@49 -- # pick_driver 00:04:51.513 16:43:40 -- setup/driver.sh@36 -- # vfio 
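The lcov probe a few entries above settles its coverage flags with lt 1.15 2, a field-wise version comparison: both strings are split on IFS=.-: and compared numerically, and the first differing field decides. A sketch of that comparison technique, assuming purely numeric fields (the real scripts/common.sh helpers add extra guards such as decimal()):

    #!/usr/bin/env bash

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}              # missing fields count as 0
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '<=' || $op == '>=' || $op == '==' ]]  # all fields compare equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo 'lcov predates 2.x'   # the branch this run takes

The pick_driver walk that chooses between vfio-pci and uio_pci_generic resumes below.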
00:04:51.513 16:43:40 -- setup/driver.sh@21 -- # local iommu_groups
00:04:51.513 16:43:40 -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:51.513 16:43:40 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:51.513 16:43:40 -- setup/driver.sh@25 -- # unsafe_vfio=N
00:04:51.513 16:43:40 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:51.513 16:43:40 -- setup/driver.sh@29 -- # (( 0 > 0 ))
00:04:51.513 16:43:40 -- setup/driver.sh@29 -- # [[ N == Y ]]
00:04:51.513 16:43:40 -- setup/driver.sh@32 -- # return 1
00:04:51.513 16:43:40 -- setup/driver.sh@38 -- # uio
00:04:51.513 16:43:40 -- setup/driver.sh@17 -- # is_driver uio_pci_generic
00:04:51.513 16:43:40 -- setup/driver.sh@14 -- # mod uio_pci_generic
00:04:51.513 16:43:40 -- setup/driver.sh@12 -- # dep uio_pci_generic
00:04:51.513 16:43:40 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic
00:04:51.513 16:43:40 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko
00:04:51.513 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]]
00:04:51.513 16:43:40 -- setup/driver.sh@39 -- # echo uio_pci_generic
00:04:51.513 Looking for driver=uio_pci_generic
00:04:51.513 16:43:40 -- setup/driver.sh@49 -- # driver=uio_pci_generic
00:04:51.513 16:43:40 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:51.513 16:43:40 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic'
00:04:51.513 16:43:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:51.513 16:43:40 -- setup/driver.sh@45 -- # setup output config
00:04:51.513 16:43:40 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:51.513 16:43:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:51.772 16:43:40 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]]
00:04:51.772 16:43:40 -- setup/driver.sh@58 -- # continue
00:04:51.772 16:43:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:52.031 16:43:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:52.031 16:43:40 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:04:52.031 16:43:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:52.967 16:43:41 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:52.967 16:43:41 -- setup/driver.sh@65 -- # setup reset
00:04:52.967 16:43:41 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:52.967 16:43:41 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:53.533
00:04:53.533 real 0m2.096s
00:04:53.533 user 0m0.434s
00:04:53.533 sys 0m1.658s
00:04:53.533 16:43:42 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:53.533 16:43:42 -- common/autotest_common.sh@10 -- # set +x
00:04:53.533 ************************************
00:04:53.533 END TEST guess_driver
00:04:53.533 ************************************
00:04:53.533
00:04:53.533 real 0m2.767s
00:04:53.533 user 0m0.794s
00:04:53.533 sys 0m1.993s
00:04:53.533 16:43:42 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:53.533 16:43:42 -- common/autotest_common.sh@10 -- # set +x
00:04:53.533 ************************************
00:04:53.533 END TEST driver
00:04:53.533 ************************************
00:04:53.533 16:43:42 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
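pick_driver fell through to uio_pci_generic above because this VM exposes no IOMMU groups and unsafe no-IOMMU vfio is disabled. A condensed sketch of that fallback decision (the checks mirror the trace; helper structure and names are assumptions):

    #!/usr/bin/env bash
    shopt -s nullglob   # an empty /sys/kernel/iommu_groups must count as zero groups

    pick_driver() {
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        local unsafe_vfio=N
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic &> /dev/null; then
            echo uio_pci_generic   # the path taken in this run
        else
            echo 'No valid driver found'
        fi
    }

    driver=$(pick_driver)

00:04:53.533 16:43:42 -- common/autotest_common.sh@1087 -- #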
'[' 2 -le 1 ']' 00:04:53.533 16:43:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.533 16:43:42 -- common/autotest_common.sh@10 -- # set +x 00:04:53.533 ************************************ 00:04:53.533 START TEST devices 00:04:53.533 ************************************ 00:04:53.533 16:43:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:53.791 * Looking for test storage... 00:04:53.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:53.791 16:43:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:53.791 16:43:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:53.791 16:43:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:53.791 16:43:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:53.791 16:43:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:53.791 16:43:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:53.791 16:43:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:53.791 16:43:42 -- scripts/common.sh@335 -- # IFS=.-: 00:04:53.791 16:43:42 -- scripts/common.sh@335 -- # read -ra ver1 00:04:53.791 16:43:42 -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.791 16:43:42 -- scripts/common.sh@336 -- # read -ra ver2 00:04:53.791 16:43:42 -- scripts/common.sh@337 -- # local 'op=<' 00:04:53.791 16:43:42 -- scripts/common.sh@339 -- # ver1_l=2 00:04:53.791 16:43:42 -- scripts/common.sh@340 -- # ver2_l=1 00:04:53.791 16:43:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:53.791 16:43:42 -- scripts/common.sh@343 -- # case "$op" in 00:04:53.791 16:43:42 -- scripts/common.sh@344 -- # : 1 00:04:53.791 16:43:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:53.791 16:43:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.791 16:43:42 -- scripts/common.sh@364 -- # decimal 1 00:04:53.791 16:43:42 -- scripts/common.sh@352 -- # local d=1 00:04:53.791 16:43:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.791 16:43:42 -- scripts/common.sh@354 -- # echo 1 00:04:53.791 16:43:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:53.791 16:43:42 -- scripts/common.sh@365 -- # decimal 2 00:04:53.791 16:43:42 -- scripts/common.sh@352 -- # local d=2 00:04:53.791 16:43:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.791 16:43:42 -- scripts/common.sh@354 -- # echo 2 00:04:53.791 16:43:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:53.791 16:43:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:53.791 16:43:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:53.792 16:43:42 -- scripts/common.sh@367 -- # return 0 00:04:53.792 16:43:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.792 16:43:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:53.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.792 --rc genhtml_branch_coverage=1 00:04:53.792 --rc genhtml_function_coverage=1 00:04:53.792 --rc genhtml_legend=1 00:04:53.792 --rc geninfo_all_blocks=1 00:04:53.792 --rc geninfo_unexecuted_blocks=1 00:04:53.792 00:04:53.792 ' 00:04:53.792 16:43:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:53.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.792 --rc genhtml_branch_coverage=1 00:04:53.792 --rc genhtml_function_coverage=1 00:04:53.792 --rc genhtml_legend=1 00:04:53.792 --rc geninfo_all_blocks=1 00:04:53.792 --rc geninfo_unexecuted_blocks=1 00:04:53.792 00:04:53.792 ' 00:04:53.792 16:43:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:53.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.792 --rc genhtml_branch_coverage=1 00:04:53.792 --rc genhtml_function_coverage=1 00:04:53.792 --rc genhtml_legend=1 00:04:53.792 --rc geninfo_all_blocks=1 00:04:53.792 --rc geninfo_unexecuted_blocks=1 00:04:53.792 00:04:53.792 ' 00:04:53.792 16:43:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:53.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.792 --rc genhtml_branch_coverage=1 00:04:53.792 --rc genhtml_function_coverage=1 00:04:53.792 --rc genhtml_legend=1 00:04:53.792 --rc geninfo_all_blocks=1 00:04:53.792 --rc geninfo_unexecuted_blocks=1 00:04:53.792 00:04:53.792 ' 00:04:53.792 16:43:42 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:53.792 16:43:42 -- setup/devices.sh@192 -- # setup reset 00:04:53.792 16:43:42 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:53.792 16:43:42 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:54.359 16:43:43 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:54.359 16:43:43 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:54.359 16:43:43 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:54.359 16:43:43 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:54.359 16:43:43 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:54.359 16:43:43 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:54.359 16:43:43 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:54.359 16:43:43 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:54.359 16:43:43 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:04:54.359 16:43:43 -- setup/devices.sh@196 -- # blocks=() 00:04:54.359 16:43:43 -- setup/devices.sh@196 -- # declare -a blocks 00:04:54.359 16:43:43 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:54.359 16:43:43 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:54.359 16:43:43 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:54.359 16:43:43 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:54.359 16:43:43 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:54.359 16:43:43 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:54.359 16:43:43 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:54.359 16:43:43 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:54.359 16:43:43 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:54.359 16:43:43 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:54.359 16:43:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:54.359 No valid GPT data, bailing 00:04:54.359 16:43:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:54.359 16:43:43 -- scripts/common.sh@393 -- # pt= 00:04:54.359 16:43:43 -- scripts/common.sh@394 -- # return 1 00:04:54.359 16:43:43 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:54.359 16:43:43 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:54.359 16:43:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:54.359 16:43:43 -- setup/common.sh@80 -- # echo 5368709120 00:04:54.359 16:43:43 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:54.359 16:43:43 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:54.359 16:43:43 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:54.359 16:43:43 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:54.359 16:43:43 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:54.359 16:43:43 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:54.359 16:43:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.359 16:43:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.359 16:43:43 -- common/autotest_common.sh@10 -- # set +x 00:04:54.359 ************************************ 00:04:54.359 START TEST nvme_mount 00:04:54.359 ************************************ 00:04:54.359 16:43:43 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:54.359 16:43:43 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:54.359 16:43:43 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:54.359 16:43:43 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.359 16:43:43 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.359 16:43:43 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:54.359 16:43:43 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:54.359 16:43:43 -- setup/common.sh@40 -- # local part_no=1 00:04:54.359 16:43:43 -- setup/common.sh@41 -- # local size=1073741824 00:04:54.359 16:43:43 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:54.359 16:43:43 -- setup/common.sh@44 -- # parts=() 00:04:54.359 16:43:43 -- setup/common.sh@44 -- # local parts 00:04:54.359 16:43:43 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:54.359 16:43:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:54.359 16:43:43 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:54.359 16:43:43 -- 
setup/common.sh@46 -- # (( part++ ))
00:04:54.359 16:43:43 -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:54.359 16:43:43 -- setup/common.sh@51 -- # (( size /= 4096 ))
00:04:54.359 16:43:43 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:54.359 16:43:43 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:04:55.294 Creating new GPT entries in memory.
00:04:55.294 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:55.294 other utilities.
00:04:55.294 16:43:44 -- setup/common.sh@57 -- # (( part = 1 ))
00:04:55.294 16:43:44 -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:55.294 16:43:44 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:55.294 16:43:44 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:55.294 16:43:44 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191
00:04:56.700 Creating new GPT entries in memory.
00:04:56.700 The operation has completed successfully.
00:04:56.700 16:43:45 -- setup/common.sh@57 -- # (( part++ ))
00:04:56.700 16:43:45 -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:56.700 16:43:45 -- setup/common.sh@62 -- # wait 96676
00:04:56.700 16:43:45 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:56.700 16:43:45 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=
00:04:56.700 16:43:45 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:56.700 16:43:45 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:04:56.700 16:43:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:04:56.700 16:43:45 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:56.700 16:43:45 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:56.700 16:43:45 -- setup/devices.sh@48 -- # local dev=0000:00:06.0
00:04:56.700 16:43:45 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:04:56.700 16:43:45 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:56.700 16:43:45 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:56.700 16:43:45 -- setup/devices.sh@53 -- # local found=0
00:04:56.700 16:43:45 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:56.700 16:43:45 -- setup/devices.sh@56 -- # :
00:04:56.700 16:43:45 -- setup/devices.sh@59 -- # local pci status
00:04:56.700 16:43:45 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:56.700 16:43:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0
00:04:56.700 16:43:45 -- setup/devices.sh@47 -- # setup output config
00:04:56.700 16:43:45 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:56.700 16:43:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:56.700 16:43:45 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:56.700 16:43:45 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
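partition_drive above drives sgdisk under flock while sync_dev_uevents.sh waits in the background for the kernel to announce the new partitions. A rough sketch of that sequencing (sizes and paths as in this run; the uevent listener itself is not reproduced, and the exact helper differs):

    #!/usr/bin/env bash
    rootdir=/home/vagrant/spdk_repo/spdk

    partition_drive() {
        local disk=$1 part_no=${2:-1} size=1073741824
        local part part_start=0 part_end=0
        local parts=()
        for (( part = 1; part <= part_no; part++ )); do
            parts+=("${disk}p$part")
        done
        (( size /= 4096 ))   # 1 GiB becomes the 262144-sector span 1:2048:264191 above
        "$rootdir/scripts/sync_dev_uevents.sh" block/partition "${parts[@]}" &
        local sync_pid=$!
        sgdisk "/dev/$disk" --zap-all
        for (( part = 1; part <= part_no; part++ )); do
            (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
            (( part_end = part_start + size - 1 ))
            flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
        done
        wait "$sync_pid"   # the "wait 96676" step traced above
    }

    partition_drive nvme0n1 1

00:04:56.700 16:43:45 --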
setup/devices.sh@63 -- # found=1 00:04:56.700 16:43:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.700 16:43:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:56.700 16:43:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.700 16:43:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:56.700 16:43:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.077 16:43:46 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:58.077 16:43:46 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:58.077 16:43:46 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.077 16:43:46 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:58.077 16:43:46 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:58.077 16:43:46 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:58.077 16:43:46 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.077 16:43:46 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.077 16:43:46 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:58.077 16:43:46 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:58.077 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:58.077 16:43:46 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:58.077 16:43:46 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:58.077 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:58.077 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:58.077 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:58.077 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:58.077 16:43:46 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:58.078 16:43:46 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:58.078 16:43:46 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.078 16:43:46 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:58.078 16:43:46 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:58.078 16:43:46 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.078 16:43:46 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:58.078 16:43:46 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:58.078 16:43:46 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:58.078 16:43:46 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.078 16:43:46 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:58.078 16:43:46 -- setup/devices.sh@53 -- # local found=0 00:04:58.078 16:43:46 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:58.078 16:43:46 -- setup/devices.sh@56 -- # : 00:04:58.078 16:43:46 -- setup/devices.sh@59 -- # local pci status 
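The mkfs helper traced above formats its target (a partition in the first pass, the whole disk here for the 1024M pass) and mounts it at the test mount point. A sketch of the helper's shape, with argument handling assumed; the verify scan over setup.sh output continues below:

    mkfs() {
        local dev=$1 mount=$2 size=$3   # size optional, e.g. 1024M; unquoted below so an empty value drops out
        mkdir -p "$mount"
        [[ -e $dev ]] || return 1
        mkfs.ext4 -qF "$dev" $size      # mke2fs accepts an fs-size with a K/M/G suffix
        mount "$dev" "$mount"
    }

    mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M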
00:04:58.078 16:43:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.078 16:43:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:58.078 16:43:46 -- setup/devices.sh@47 -- # setup output config 00:04:58.078 16:43:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.078 16:43:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:58.337 16:43:46 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:58.337 16:43:46 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:58.337 16:43:46 -- setup/devices.sh@63 -- # found=1 00:04:58.337 16:43:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.337 16:43:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:58.337 16:43:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.337 16:43:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:58.337 16:43:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.240 16:43:48 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.240 16:43:48 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:00.240 16:43:48 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.240 16:43:48 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:00.240 16:43:48 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:00.240 16:43:48 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.240 16:43:48 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:00.240 16:43:48 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:00.240 16:43:48 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:00.240 16:43:48 -- setup/devices.sh@50 -- # local mount_point= 00:05:00.240 16:43:48 -- setup/devices.sh@51 -- # local test_file= 00:05:00.240 16:43:48 -- setup/devices.sh@53 -- # local found=0 00:05:00.240 16:43:48 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:00.240 16:43:48 -- setup/devices.sh@59 -- # local pci status 00:05:00.240 16:43:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.240 16:43:48 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:00.240 16:43:48 -- setup/devices.sh@47 -- # setup output config 00:05:00.240 16:43:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.240 16:43:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:00.240 16:43:48 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.240 16:43:48 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:00.240 16:43:48 -- setup/devices.sh@63 -- # found=1 00:05:00.240 16:43:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.240 16:43:48 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.240 16:43:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.240 16:43:49 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.240 16:43:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.618 16:43:50 -- setup/devices.sh@66 -- # (( found == 1 )) 
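verify, run after each mount above, re-runs setup.sh config with PCI_ALLOWED narrowed to the device under test and scans the status lines for an "Active devices: ..." marker proving the mount kept setup.sh from rebinding the disk. A condensed sketch of that scan (marker format as in the lines above; the variable plumbing is an assumption):

    #!/usr/bin/env bash
    rootdir=/home/vagrant/spdk_repo/spdk

    verify() {
        local dev=$1 mounts=$2 found=0
        local pci _ status
        while read -r pci _ _ status; do
            [[ $pci == "$dev" ]] || continue
            [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
        done < <(PCI_ALLOWED="$dev" "$rootdir/scripts/setup.sh" config)
        (( found == 1 ))
    }

    verify 0000:00:06.0 data@nvme0n1   # the check that just set found=1 above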
00:05:01.618 16:43:50 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:01.618 16:43:50 -- setup/devices.sh@68 -- # return 0 00:05:01.618 16:43:50 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:01.618 16:43:50 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.618 16:43:50 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.618 16:43:50 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.618 16:43:50 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:01.618 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:01.618 00:05:01.618 real 0m7.054s 00:05:01.618 user 0m0.746s 00:05:01.618 sys 0m4.341s 00:05:01.618 16:43:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:01.618 16:43:50 -- common/autotest_common.sh@10 -- # set +x 00:05:01.618 ************************************ 00:05:01.618 END TEST nvme_mount 00:05:01.618 ************************************ 00:05:01.618 16:43:50 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:01.618 16:43:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:01.618 16:43:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.618 16:43:50 -- common/autotest_common.sh@10 -- # set +x 00:05:01.618 ************************************ 00:05:01.618 START TEST dm_mount 00:05:01.618 ************************************ 00:05:01.618 16:43:50 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:01.618 16:43:50 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:01.618 16:43:50 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:01.618 16:43:50 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:01.618 16:43:50 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:01.618 16:43:50 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:01.618 16:43:50 -- setup/common.sh@40 -- # local part_no=2 00:05:01.618 16:43:50 -- setup/common.sh@41 -- # local size=1073741824 00:05:01.618 16:43:50 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:01.618 16:43:50 -- setup/common.sh@44 -- # parts=() 00:05:01.618 16:43:50 -- setup/common.sh@44 -- # local parts 00:05:01.618 16:43:50 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:01.618 16:43:50 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:01.618 16:43:50 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:01.618 16:43:50 -- setup/common.sh@46 -- # (( part++ )) 00:05:01.618 16:43:50 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:01.618 16:43:50 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:01.618 16:43:50 -- setup/common.sh@46 -- # (( part++ )) 00:05:01.618 16:43:50 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:01.618 16:43:50 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:01.618 16:43:50 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:01.618 16:43:50 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:02.555 Creating new GPT entries in memory. 00:05:02.555 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:02.555 other utilities. 00:05:02.555 16:43:51 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:02.555 16:43:51 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.555 16:43:51 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:02.556 16:43:51 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:02.556 16:43:51 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:03.491 Creating new GPT entries in memory. 00:05:03.491 The operation has completed successfully. 00:05:03.491 16:43:52 -- setup/common.sh@57 -- # (( part++ )) 00:05:03.491 16:43:52 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.491 16:43:52 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:03.491 16:43:52 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:03.491 16:43:52 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:04.869 The operation has completed successfully. 00:05:04.869 16:43:53 -- setup/common.sh@57 -- # (( part++ )) 00:05:04.869 16:43:53 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:04.869 16:43:53 -- setup/common.sh@62 -- # wait 97174 00:05:04.869 16:43:53 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:04.869 16:43:53 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:04.869 16:43:53 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:04.869 16:43:53 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:04.869 16:43:53 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:04.869 16:43:53 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:04.869 16:43:53 -- setup/devices.sh@161 -- # break 00:05:04.869 16:43:53 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:04.869 16:43:53 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:04.869 16:43:53 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:04.869 16:43:53 -- setup/devices.sh@166 -- # dm=dm-0 00:05:04.869 16:43:53 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:04.869 16:43:53 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:04.869 16:43:53 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:04.869 16:43:53 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:04.869 16:43:53 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:04.869 16:43:53 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:04.869 16:43:53 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:04.869 16:43:53 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:04.869 16:43:53 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:04.869 16:43:53 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:04.869 16:43:53 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:04.869 16:43:53 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:04.869 16:43:53 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:04.869 16:43:53 -- setup/devices.sh@53 -- # local found=0 00:05:04.869 16:43:53 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 
]] 00:05:04.869 16:43:53 -- setup/devices.sh@56 -- # : 00:05:04.869 16:43:53 -- setup/devices.sh@59 -- # local pci status 00:05:04.869 16:43:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.869 16:43:53 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:04.869 16:43:53 -- setup/devices.sh@47 -- # setup output config 00:05:04.869 16:43:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.869 16:43:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:04.869 16:43:53 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:04.869 16:43:53 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:04.869 16:43:53 -- setup/devices.sh@63 -- # found=1 00:05:04.869 16:43:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.869 16:43:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:04.869 16:43:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.129 16:43:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:05.129 16:43:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.065 16:43:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:06.065 16:43:54 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:06.065 16:43:54 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:06.065 16:43:54 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:06.065 16:43:54 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:06.065 16:43:54 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:06.065 16:43:54 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:06.065 16:43:54 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:06.065 16:43:54 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:06.065 16:43:54 -- setup/devices.sh@50 -- # local mount_point= 00:05:06.065 16:43:54 -- setup/devices.sh@51 -- # local test_file= 00:05:06.065 16:43:54 -- setup/devices.sh@53 -- # local found=0 00:05:06.065 16:43:54 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:06.065 16:43:54 -- setup/devices.sh@59 -- # local pci status 00:05:06.065 16:43:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.065 16:43:54 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:06.065 16:43:54 -- setup/devices.sh@47 -- # setup output config 00:05:06.065 16:43:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.065 16:43:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:06.324 16:43:55 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:06.324 16:43:55 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:06.324 16:43:55 -- setup/devices.sh@63 -- # found=1 00:05:06.324 16:43:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.324 16:43:55 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == 
\0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:06.324 16:43:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.324 16:43:55 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:06.324 16:43:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.701 16:43:56 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:07.701 16:43:56 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:07.701 16:43:56 -- setup/devices.sh@68 -- # return 0 00:05:07.701 16:43:56 -- setup/devices.sh@187 -- # cleanup_dm 00:05:07.701 16:43:56 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:07.701 16:43:56 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:07.701 16:43:56 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:07.701 16:43:56 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:07.701 16:43:56 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:07.701 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:07.701 16:43:56 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:07.701 16:43:56 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:07.701 ************************************ 00:05:07.701 END TEST dm_mount 00:05:07.701 00:05:07.701 real 0m6.079s 00:05:07.701 user 0m0.434s 00:05:07.701 sys 0m2.491s 00:05:07.701 16:43:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.701 16:43:56 -- common/autotest_common.sh@10 -- # set +x 00:05:07.701 ************************************ 00:05:07.701 16:43:56 -- setup/devices.sh@1 -- # cleanup 00:05:07.701 16:43:56 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:07.701 16:43:56 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:07.701 16:43:56 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:07.701 16:43:56 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:07.701 16:43:56 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:07.701 16:43:56 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:07.701 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:07.701 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:07.701 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:07.701 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:07.701 16:43:56 -- setup/devices.sh@12 -- # cleanup_dm 00:05:07.701 16:43:56 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:07.701 16:43:56 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:07.701 16:43:56 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:07.701 16:43:56 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:07.701 16:43:56 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:07.701 16:43:56 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:07.701 ************************************ 00:05:07.701 END TEST devices 00:05:07.701 ************************************ 00:05:07.701 00:05:07.701 real 0m14.008s 00:05:07.701 user 0m1.652s 00:05:07.701 sys 0m7.214s 00:05:07.701 16:43:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.701 16:43:56 -- common/autotest_common.sh@10 -- # set +x 00:05:07.701 00:05:07.701 real 0m28.605s 00:05:07.701 user 0m6.626s 00:05:07.701 sys 0m17.186s 00:05:07.701 16:43:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.701 
16:43:56 -- common/autotest_common.sh@10 -- # set +x 00:05:07.701 ************************************ 00:05:07.701 END TEST setup.sh 00:05:07.701 ************************************ 00:05:07.701 16:43:56 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:07.701 Hugepages 00:05:07.701 node hugesize free / total 00:05:07.960 node0 1048576kB 0 / 0 00:05:07.960 node0 2048kB 2048 / 2048 00:05:07.960 00:05:07.960 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:07.960 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:07.960 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:07.960 16:43:56 -- spdk/autotest.sh@128 -- # uname -s 00:05:07.960 16:43:56 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:07.960 16:43:56 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:07.960 16:43:56 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:08.527 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:08.527 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:09.466 16:43:58 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:10.843 16:43:59 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:10.843 16:43:59 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:10.843 16:43:59 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:10.843 16:43:59 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:10.843 16:43:59 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:10.843 16:43:59 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:10.843 16:43:59 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:10.843 16:43:59 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:10.843 16:43:59 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:10.843 16:43:59 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:05:10.843 16:43:59 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:05:10.843 16:43:59 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.843 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:10.843 Waiting for block devices as requested 00:05:11.103 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:11.103 16:43:59 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:11.103 16:43:59 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:11.103 16:43:59 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:11.103 16:43:59 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 00:05:11.103 16:43:59 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:11.103 16:43:59 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:11.103 16:43:59 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:11.103 16:43:59 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:11.103 16:43:59 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:11.103 16:43:59 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:11.103 16:43:59 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:11.103 
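Before the unit tests run, the harness enumerates every NVMe controller: gen_nvme.sh piped through jq lists the PCI addresses (BDFs), each BDF is resolved to its /dev/nvmeX node through sysfs, and `nvme id-ctrl` is queried for the OACS word. A hedged sketch of that resolution plus the check completed by the grep/cut pipeline just below (a reconstruction for illustration, not the verbatim autotest_common.sh helpers):

    #!/usr/bin/env bash
    # Sketch: resolve a PCI BDF to its NVMe character device, then test whether
    # the controller advertises Namespace Management (OACS bit 3). Illustrative.

    get_nvme_ctrlr_from_bdf() {
        local bdf=$1 link
        for link in /sys/class/nvme/nvme*; do
            # readlink -f resolves the sysfs symlink to the PCI device path, e.g.
            # /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0
            if readlink -f "$link" | grep -q "$bdf/nvme/nvme"; then
                basename "$link"        # -> nvme0
                return 0
            fi
        done
        return 1
    }

    nvme_ctrlr=/dev/$(get_nvme_ctrlr_from_bdf 0000:00:06.0)
    # The id-ctrl | grep | cut pipeline traced below extracts the OACS field;
    # masking bit 3 (0x8, Namespace Management) is what turns oacs=0x12a into
    # oacs_ns_manage=8 in the log that follows.
    oacs=$(nvme id-ctrl "$nvme_ctrlr" | grep oacs | cut -d: -f2)
    oacs_ns_manage=$(( oacs & 0x8 ))
    (( oacs_ns_manage != 0 )) && echo "$nvme_ctrlr supports namespace management"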
16:43:59 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:11.103 16:43:59 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:11.103 16:43:59 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:11.103 16:43:59 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:11.103 16:43:59 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:11.103 16:43:59 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:11.103 16:43:59 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:11.103 16:43:59 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:11.103 16:43:59 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:11.103 16:43:59 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:11.103 16:43:59 -- common/autotest_common.sh@1552 -- # continue 00:05:11.103 16:43:59 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:11.103 16:43:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:11.103 16:43:59 -- common/autotest_common.sh@10 -- # set +x 00:05:11.103 16:43:59 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:11.103 16:43:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:11.103 16:43:59 -- common/autotest_common.sh@10 -- # set +x 00:05:11.103 16:43:59 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:11.362 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:11.620 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:12.999 16:44:01 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:12.999 16:44:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:12.999 16:44:01 -- common/autotest_common.sh@10 -- # set +x 00:05:12.999 16:44:01 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:12.999 16:44:01 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:12.999 16:44:01 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:12.999 16:44:01 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:12.999 16:44:01 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:12.999 16:44:01 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:12.999 16:44:01 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:12.999 16:44:01 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:12.999 16:44:01 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:12.999 16:44:01 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:12.999 16:44:01 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:12.999 16:44:01 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:05:12.999 16:44:01 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:05:12.999 16:44:01 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:12.999 16:44:01 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:12.999 16:44:01 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:12.999 16:44:01 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:12.999 16:44:01 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:12.999 16:44:01 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:12.999 16:44:01 -- common/autotest_common.sh@1588 -- # return 0 00:05:12.999 16:44:01 -- spdk/autotest.sh@148 -- # '[' 1 -eq 1 ']' 00:05:12.999 16:44:01 -- spdk/autotest.sh@149 -- # run_test unittest 
/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:12.999 16:44:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.999 16:44:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.999 16:44:01 -- common/autotest_common.sh@10 -- # set +x 00:05:12.999 ************************************ 00:05:12.999 START TEST unittest 00:05:12.999 ************************************ 00:05:12.999 16:44:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:12.999 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:12.999 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:05:12.999 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:05:12.999 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:12.999 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 00:05:12.999 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:12.999 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:12.999 ++ rpc_py=rpc_cmd 00:05:12.999 ++ set -e 00:05:12.999 ++ shopt -s nullglob 00:05:12.999 ++ shopt -s extglob 00:05:12.999 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:12.999 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:12.999 +++ CONFIG_WPDK_DIR= 00:05:12.999 +++ CONFIG_ASAN=y 00:05:12.999 +++ CONFIG_VBDEV_COMPRESS=n 00:05:12.999 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:12.999 +++ CONFIG_USDT=n 00:05:12.999 +++ CONFIG_CUSTOMOCF=n 00:05:12.999 +++ CONFIG_PREFIX=/usr/local 00:05:12.999 +++ CONFIG_RBD=n 00:05:12.999 +++ CONFIG_LIBDIR= 00:05:12.999 +++ CONFIG_IDXD=y 00:05:12.999 +++ CONFIG_NVME_CUSE=y 00:05:12.999 +++ CONFIG_SMA=n 00:05:12.999 +++ CONFIG_VTUNE=n 00:05:12.999 +++ CONFIG_TSAN=n 00:05:12.999 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:12.999 +++ CONFIG_VFIO_USER_DIR= 00:05:12.999 +++ CONFIG_PGO_CAPTURE=n 00:05:12.999 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:12.999 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:12.999 +++ CONFIG_LTO=n 00:05:12.999 +++ CONFIG_ISCSI_INITIATOR=y 00:05:12.999 +++ CONFIG_CET=n 00:05:12.999 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:12.999 +++ CONFIG_OCF_PATH= 00:05:12.999 +++ CONFIG_RDMA_SET_TOS=y 00:05:12.999 +++ CONFIG_HAVE_ARC4RANDOM=n 00:05:12.999 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:12.999 +++ CONFIG_UBLK=n 00:05:12.999 +++ CONFIG_ISAL_CRYPTO=y 00:05:12.999 +++ CONFIG_OPENSSL_PATH= 00:05:12.999 +++ CONFIG_OCF=n 00:05:12.999 +++ CONFIG_FUSE=n 00:05:12.999 +++ CONFIG_VTUNE_DIR= 00:05:12.999 +++ CONFIG_FUZZER_LIB= 00:05:12.999 +++ CONFIG_FUZZER=n 00:05:12.999 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:12.999 +++ CONFIG_CRYPTO=n 00:05:12.999 +++ CONFIG_PGO_USE=n 00:05:12.999 +++ CONFIG_VHOST=y 00:05:12.999 +++ CONFIG_DAOS=n 00:05:12.999 +++ CONFIG_DPDK_INC_DIR= 00:05:12.999 +++ CONFIG_DAOS_DIR= 00:05:12.999 +++ CONFIG_UNIT_TESTS=y 00:05:12.999 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:12.999 +++ CONFIG_VIRTIO=y 00:05:12.999 +++ CONFIG_COVERAGE=y 00:05:12.999 +++ CONFIG_RDMA=y 00:05:12.999 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:12.999 +++ CONFIG_URING_PATH= 00:05:12.999 +++ CONFIG_XNVME=n 00:05:12.999 +++ CONFIG_VFIO_USER=n 00:05:12.999 +++ CONFIG_ARCH=native 00:05:12.999 +++ CONFIG_URING_ZNS=n 00:05:12.999 +++ CONFIG_WERROR=y 00:05:12.999 +++ CONFIG_HAVE_LIBBSD=n 00:05:12.999 +++ CONFIG_UBSAN=y 00:05:12.999 +++ CONFIG_IPSEC_MB_DIR= 00:05:12.999 +++ CONFIG_GOLANG=n 00:05:12.999 +++ CONFIG_ISAL=y 00:05:12.999 +++ CONFIG_IDXD_KERNEL=n 00:05:12.999 +++ CONFIG_DPDK_LIB_DIR= 
00:05:12.999 +++ CONFIG_RDMA_PROV=verbs 00:05:12.999 +++ CONFIG_APPS=y 00:05:12.999 +++ CONFIG_SHARED=n 00:05:12.999 +++ CONFIG_FC_PATH= 00:05:12.999 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:12.999 +++ CONFIG_FC=n 00:05:12.999 +++ CONFIG_AVAHI=n 00:05:12.999 +++ CONFIG_FIO_PLUGIN=y 00:05:12.999 +++ CONFIG_RAID5F=y 00:05:12.999 +++ CONFIG_EXAMPLES=y 00:05:12.999 +++ CONFIG_TESTS=y 00:05:12.999 +++ CONFIG_CRYPTO_MLX5=n 00:05:12.999 +++ CONFIG_MAX_LCORES= 00:05:12.999 +++ CONFIG_IPSEC_MB=n 00:05:12.999 +++ CONFIG_DEBUG=y 00:05:12.999 +++ CONFIG_DPDK_COMPRESSDEV=n 00:05:12.999 +++ CONFIG_CROSS_PREFIX= 00:05:12.999 +++ CONFIG_URING=n 00:05:12.999 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:12.999 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:12.999 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:05:12.999 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:05:12.999 +++ _root=/home/vagrant/spdk_repo/spdk 00:05:12.999 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:05:12.999 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:05:12.999 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:05:12.999 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:12.999 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:12.999 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:12.999 +++ VHOST_APP=("$_app_dir/vhost") 00:05:12.999 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:12.999 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:12.999 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:12.999 +++ [[ #ifndef SPDK_CONFIG_H 00:05:12.999 #define SPDK_CONFIG_H 00:05:12.999 #define SPDK_CONFIG_APPS 1 00:05:12.999 #define SPDK_CONFIG_ARCH native 00:05:12.999 #define SPDK_CONFIG_ASAN 1 00:05:12.999 #undef SPDK_CONFIG_AVAHI 00:05:12.999 #undef SPDK_CONFIG_CET 00:05:12.999 #define SPDK_CONFIG_COVERAGE 1 00:05:12.999 #define SPDK_CONFIG_CROSS_PREFIX 00:05:12.999 #undef SPDK_CONFIG_CRYPTO 00:05:12.999 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:12.999 #undef SPDK_CONFIG_CUSTOMOCF 00:05:12.999 #undef SPDK_CONFIG_DAOS 00:05:12.999 #define SPDK_CONFIG_DAOS_DIR 00:05:12.999 #define SPDK_CONFIG_DEBUG 1 00:05:12.999 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:12.999 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:12.999 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:12.999 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:12.999 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:12.999 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:12.999 #define SPDK_CONFIG_EXAMPLES 1 00:05:12.999 #undef SPDK_CONFIG_FC 00:05:12.999 #define SPDK_CONFIG_FC_PATH 00:05:12.999 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:12.999 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:12.999 #undef SPDK_CONFIG_FUSE 00:05:12.999 #undef SPDK_CONFIG_FUZZER 00:05:12.999 #define SPDK_CONFIG_FUZZER_LIB 00:05:12.999 #undef SPDK_CONFIG_GOLANG 00:05:12.999 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:05:12.999 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:12.999 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:12.999 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:12.999 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:12.999 #define SPDK_CONFIG_IDXD 1 00:05:12.999 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:12.999 #undef SPDK_CONFIG_IPSEC_MB 00:05:12.999 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:12.999 #define SPDK_CONFIG_ISAL 1 00:05:12.999 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:12.999 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:12.999 #define SPDK_CONFIG_LIBDIR 00:05:12.999 #undef 
SPDK_CONFIG_LTO 00:05:12.999 #define SPDK_CONFIG_MAX_LCORES 00:05:12.999 #define SPDK_CONFIG_NVME_CUSE 1 00:05:12.999 #undef SPDK_CONFIG_OCF 00:05:12.999 #define SPDK_CONFIG_OCF_PATH 00:05:12.999 #define SPDK_CONFIG_OPENSSL_PATH 00:05:12.999 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:12.999 #undef SPDK_CONFIG_PGO_USE 00:05:12.999 #define SPDK_CONFIG_PREFIX /usr/local 00:05:12.999 #define SPDK_CONFIG_RAID5F 1 00:05:12.999 #undef SPDK_CONFIG_RBD 00:05:12.999 #define SPDK_CONFIG_RDMA 1 00:05:12.999 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:12.999 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:12.999 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:12.999 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:12.999 #undef SPDK_CONFIG_SHARED 00:05:12.999 #undef SPDK_CONFIG_SMA 00:05:13.000 #define SPDK_CONFIG_TESTS 1 00:05:13.000 #undef SPDK_CONFIG_TSAN 00:05:13.000 #undef SPDK_CONFIG_UBLK 00:05:13.000 #define SPDK_CONFIG_UBSAN 1 00:05:13.000 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:13.000 #undef SPDK_CONFIG_URING 00:05:13.000 #define SPDK_CONFIG_URING_PATH 00:05:13.000 #undef SPDK_CONFIG_URING_ZNS 00:05:13.000 #undef SPDK_CONFIG_USDT 00:05:13.000 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:13.000 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:13.000 #undef SPDK_CONFIG_VFIO_USER 00:05:13.000 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:13.000 #define SPDK_CONFIG_VHOST 1 00:05:13.000 #define SPDK_CONFIG_VIRTIO 1 00:05:13.000 #undef SPDK_CONFIG_VTUNE 00:05:13.000 #define SPDK_CONFIG_VTUNE_DIR 00:05:13.000 #define SPDK_CONFIG_WERROR 1 00:05:13.000 #define SPDK_CONFIG_WPDK_DIR 00:05:13.000 #undef SPDK_CONFIG_XNVME 00:05:13.000 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:13.000 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:13.000 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:13.000 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:13.000 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.000 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.000 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:13.000 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:13.000 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:13.000 ++++ export PATH 00:05:13.000 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:13.000 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:13.000 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:13.000 ++++ readlink -f 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:13.000 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:13.000 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:13.000 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:05:13.000 +++ TEST_TAG=N/A 00:05:13.000 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:13.000 ++ : 1 00:05:13.000 ++ export RUN_NIGHTLY 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_RUN_VALGRIND 00:05:13.000 ++ : 1 00:05:13.000 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:13.000 ++ : 1 00:05:13.000 ++ export SPDK_TEST_UNITTEST 00:05:13.000 ++ : 00:05:13.000 ++ export SPDK_TEST_AUTOBUILD 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_RELEASE_BUILD 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_ISAL 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_ISCSI 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:13.000 ++ : 1 00:05:13.000 ++ export SPDK_TEST_NVME 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_NVME_PMR 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_NVME_BP 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_NVME_CLI 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_NVME_CUSE 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_NVME_FDP 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_NVMF 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_VFIOUSER 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_FUZZER 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_FUZZER_SHORT 00:05:13.000 ++ : rdma 00:05:13.000 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_RBD 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_VHOST 00:05:13.000 ++ : 1 00:05:13.000 ++ export SPDK_TEST_BLOCKDEV 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_IOAT 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_BLOBFS 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_VHOST_INIT 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_LVOL 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:13.000 ++ : 1 00:05:13.000 ++ export SPDK_RUN_ASAN 00:05:13.000 ++ : 1 00:05:13.000 ++ export SPDK_RUN_UBSAN 00:05:13.000 ++ : 00:05:13.000 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_RUN_NON_ROOT 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_CRYPTO 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_FTL 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_OCF 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_VMD 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_OPAL 00:05:13.000 ++ : 00:05:13.000 ++ export SPDK_TEST_NATIVE_DPDK 00:05:13.000 ++ : true 00:05:13.000 ++ export SPDK_AUTOTEST_X 00:05:13.000 ++ : 1 00:05:13.000 ++ export SPDK_TEST_RAID5 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_URING 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_USDT 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_USE_IGB_UIO 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_SCHEDULER 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_SCANBUILD 00:05:13.000 ++ : 00:05:13.000 ++ export SPDK_TEST_NVMF_NICS 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_SMA 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_DAOS 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_XNVME 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_ACCEL_DSA 
00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_ACCEL_IAA 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_ACCEL_IOAT 00:05:13.000 ++ : 00:05:13.000 ++ export SPDK_TEST_FUZZER_TARGET 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_TEST_NVMF_MDNS 00:05:13.000 ++ : 0 00:05:13.000 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:13.000 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:13.000 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:13.000 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:13.000 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:13.000 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:13.000 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:13.000 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:13.000 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:13.000 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:13.000 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:13.000 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:13.000 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:13.000 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:13.000 ++ PYTHONDONTWRITEBYTECODE=1 00:05:13.000 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:13.000 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:13.000 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:13.000 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:13.000 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:13.000 ++ rm -rf /var/tmp/asan_suppression_file 00:05:13.000 ++ cat 00:05:13.000 ++ echo leak:libfuse3.so 00:05:13.000 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:13.000 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:13.000 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:13.000 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:13.000 ++ '[' -z /var/spdk/dependencies ']' 00:05:13.000 ++ export DEPENDENCY_DIR 00:05:13.000 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:13.000 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:13.000 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:13.000 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:13.000 ++ export QEMU_BIN= 00:05:13.000 ++ QEMU_BIN= 00:05:13.000 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:13.000 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:13.000 ++ export 
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:13.000 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:13.000 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:13.000 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:13.000 ++ _LCOV_MAIN=0 00:05:13.000 ++ _LCOV_LLVM=1 00:05:13.000 ++ _LCOV= 00:05:13.000 ++ [[ '' == *clang* ]] 00:05:13.000 ++ [[ 0 -eq 1 ]] 00:05:13.000 ++ _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:05:13.000 ++ _lcov_opt[_LCOV_MAIN]= 00:05:13.000 ++ lcov_opt= 00:05:13.000 ++ '[' 0 -eq 0 ']' 00:05:13.000 ++ export valgrind= 00:05:13.000 ++ valgrind= 00:05:13.000 +++ uname -s 00:05:13.000 ++ '[' Linux = Linux ']' 00:05:13.000 ++ HUGEMEM=4096 00:05:13.000 ++ export CLEAR_HUGE=yes 00:05:13.000 ++ CLEAR_HUGE=yes 00:05:13.000 ++ [[ 0 -eq 1 ]] 00:05:13.000 ++ [[ 0 -eq 1 ]] 00:05:13.000 ++ MAKE=make 00:05:13.000 +++ nproc 00:05:13.000 ++ MAKEFLAGS=-j10 00:05:13.000 ++ export HUGEMEM=4096 00:05:13.001 ++ HUGEMEM=4096 00:05:13.001 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:05:13.001 ++ NO_HUGE=() 00:05:13.001 ++ TEST_MODE= 00:05:13.001 ++ [[ -z '' ]] 00:05:13.001 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:13.001 ++ exec 00:05:13.001 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:13.001 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:13.001 ++ set_test_storage 2147483648 00:05:13.001 ++ [[ -v testdir ]] 00:05:13.001 ++ local requested_size=2147483648 00:05:13.001 ++ local mount target_dir 00:05:13.001 ++ local -A mounts fss sizes avails uses 00:05:13.001 ++ local source fs size avail mount use 00:05:13.001 ++ local storage_fallback storage_candidates 00:05:13.001 +++ mktemp -udt spdk.XXXXXX 00:05:13.001 ++ storage_fallback=/tmp/spdk.P2Vcso 00:05:13.001 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:13.001 ++ [[ -n '' ]] 00:05:13.001 ++ [[ -n '' ]] 00:05:13.001 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.P2Vcso/tests/unit /tmp/spdk.P2Vcso 00:05:13.001 ++ requested_size=2214592512 00:05:13.001 ++ read -r source fs size use avail _ mount 00:05:13.001 +++ df -T 00:05:13.001 +++ grep -v Filesystem 00:05:13.001 ++ mounts["$mount"]=tmpfs 00:05:13.001 ++ fss["$mount"]=tmpfs 00:05:13.001 ++ avails["$mount"]=1252601856 00:05:13.001 ++ sizes["$mount"]=1253683200 00:05:13.001 ++ uses["$mount"]=1081344 00:05:13.001 ++ read -r source fs size use avail _ mount 00:05:13.001 ++ mounts["$mount"]=/dev/vda1 00:05:13.001 ++ fss["$mount"]=ext4 00:05:13.001 ++ avails["$mount"]=10462924800 00:05:13.001 ++ sizes["$mount"]=20616794112 00:05:13.001 ++ uses["$mount"]=10137092096 00:05:13.001 ++ read -r source fs size use avail _ mount 00:05:13.001 ++ mounts["$mount"]=tmpfs 00:05:13.001 ++ fss["$mount"]=tmpfs 00:05:13.001 ++ avails["$mount"]=6268403712 00:05:13.001 ++ sizes["$mount"]=6268403712 00:05:13.001 ++ uses["$mount"]=0 00:05:13.001 ++ read -r source fs size use avail _ mount 00:05:13.001 ++ mounts["$mount"]=tmpfs 00:05:13.001 ++ fss["$mount"]=tmpfs 00:05:13.001 ++ avails["$mount"]=5242880 00:05:13.001 ++ sizes["$mount"]=5242880 00:05:13.001 ++ uses["$mount"]=0 00:05:13.001 ++ read -r source fs size use avail _ mount 00:05:13.001 ++ mounts["$mount"]=/dev/vda15 00:05:13.001 ++ fss["$mount"]=vfat 00:05:13.001 ++ avails["$mount"]=103061504 00:05:13.001 ++ 
sizes["$mount"]=109395968 00:05:13.001 ++ uses["$mount"]=6334464 00:05:13.001 ++ read -r source fs size use avail _ mount 00:05:13.001 ++ mounts["$mount"]=tmpfs 00:05:13.001 ++ fss["$mount"]=tmpfs 00:05:13.001 ++ avails["$mount"]=1253675008 00:05:13.001 ++ sizes["$mount"]=1253679104 00:05:13.001 ++ uses["$mount"]=4096 00:05:13.001 ++ read -r source fs size use avail _ mount 00:05:13.001 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt/output 00:05:13.001 ++ fss["$mount"]=fuse.sshfs 00:05:13.001 ++ avails["$mount"]=96488083456 00:05:13.001 ++ sizes["$mount"]=105088212992 00:05:13.001 ++ uses["$mount"]=3214696448 00:05:13.001 ++ read -r source fs size use avail _ mount 00:05:13.001 ++ printf '* Looking for test storage...\n' 00:05:13.001 * Looking for test storage... 00:05:13.001 ++ local target_space new_size 00:05:13.001 ++ for target_dir in "${storage_candidates[@]}" 00:05:13.001 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:05:13.001 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:13.001 ++ mount=/ 00:05:13.001 ++ target_space=10462924800 00:05:13.001 ++ (( target_space == 0 || target_space < requested_size )) 00:05:13.001 ++ (( target_space >= requested_size )) 00:05:13.001 ++ [[ ext4 == tmpfs ]] 00:05:13.001 ++ [[ ext4 == ramfs ]] 00:05:13.001 ++ [[ / == / ]] 00:05:13.001 ++ new_size=12351684608 00:05:13.001 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:13.001 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:13.001 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:13.001 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:05:13.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:05:13.001 ++ return 0 00:05:13.001 ++ set -o errtrace 00:05:13.001 ++ shopt -s extdebug 00:05:13.001 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:13.001 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:13.001 16:44:01 -- common/autotest_common.sh@1682 -- # true 00:05:13.001 16:44:01 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:05:13.001 16:44:01 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:13.001 16:44:01 -- common/autotest_common.sh@29 -- # exec 00:05:13.001 16:44:01 -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:13.001 16:44:01 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:05:13.001 16:44:01 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:13.001 16:44:01 -- common/autotest_common.sh@18 -- # set -x 00:05:13.001 16:44:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:13.001 16:44:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:13.001 16:44:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:13.001 16:44:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:13.001 16:44:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:13.001 16:44:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:13.001 16:44:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:13.001 16:44:01 -- scripts/common.sh@335 -- # IFS=.-: 00:05:13.001 16:44:01 -- scripts/common.sh@335 -- # read -ra ver1 00:05:13.001 16:44:01 -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.001 16:44:01 -- scripts/common.sh@336 -- # read -ra ver2 00:05:13.001 16:44:01 -- scripts/common.sh@337 -- # local 'op=<' 00:05:13.001 16:44:01 -- scripts/common.sh@339 -- # ver1_l=2 00:05:13.001 16:44:01 -- scripts/common.sh@340 -- # ver2_l=1 00:05:13.001 16:44:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:13.001 16:44:01 -- scripts/common.sh@343 -- # case "$op" in 00:05:13.001 16:44:01 -- scripts/common.sh@344 -- # : 1 00:05:13.001 16:44:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:13.001 16:44:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.001 16:44:01 -- scripts/common.sh@364 -- # decimal 1 00:05:13.001 16:44:01 -- scripts/common.sh@352 -- # local d=1 00:05:13.001 16:44:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.001 16:44:01 -- scripts/common.sh@354 -- # echo 1 00:05:13.001 16:44:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:13.001 16:44:01 -- scripts/common.sh@365 -- # decimal 2 00:05:13.001 16:44:01 -- scripts/common.sh@352 -- # local d=2 00:05:13.001 16:44:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.001 16:44:01 -- scripts/common.sh@354 -- # echo 2 00:05:13.001 16:44:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:13.001 16:44:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:13.001 16:44:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:13.001 16:44:01 -- scripts/common.sh@367 -- # return 0 00:05:13.001 16:44:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.001 16:44:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:13.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.001 --rc genhtml_branch_coverage=1 00:05:13.001 --rc genhtml_function_coverage=1 00:05:13.001 --rc genhtml_legend=1 00:05:13.001 --rc geninfo_all_blocks=1 00:05:13.001 --rc geninfo_unexecuted_blocks=1 00:05:13.001 00:05:13.001 ' 00:05:13.001 16:44:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:13.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.001 --rc genhtml_branch_coverage=1 00:05:13.001 --rc genhtml_function_coverage=1 00:05:13.001 --rc genhtml_legend=1 00:05:13.001 --rc geninfo_all_blocks=1 00:05:13.001 --rc geninfo_unexecuted_blocks=1 00:05:13.001 00:05:13.001 ' 00:05:13.001 16:44:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:13.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.001 --rc genhtml_branch_coverage=1 00:05:13.001 --rc genhtml_function_coverage=1 00:05:13.001 --rc genhtml_legend=1 00:05:13.001 --rc geninfo_all_blocks=1 00:05:13.001 --rc 
geninfo_unexecuted_blocks=1 00:05:13.001 00:05:13.001 ' 00:05:13.001 16:44:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:13.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.001 --rc genhtml_branch_coverage=1 00:05:13.001 --rc genhtml_function_coverage=1 00:05:13.001 --rc genhtml_legend=1 00:05:13.001 --rc geninfo_all_blocks=1 00:05:13.001 --rc geninfo_unexecuted_blocks=1 00:05:13.001 00:05:13.001 ' 00:05:13.001 16:44:01 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:05:13.001 16:44:01 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:05:13.001 16:44:01 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:05:13.001 16:44:01 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:05:13.001 16:44:01 -- unit/unittest.sh@174 -- # [[ y == y ]] 00:05:13.001 16:44:01 -- unit/unittest.sh@175 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:13.001 16:44:01 -- unit/unittest.sh@176 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:13.001 16:44:01 -- unit/unittest.sh@178 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:05:31.112 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:31.112 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:31.112 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:31.112 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:31.112 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:31.112 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:06:03.189 16:44:48 -- unit/unittest.sh@182 -- # uname -m 00:06:03.189 16:44:48 -- unit/unittest.sh@182 -- # '[' x86_64 = aarch64 ']' 00:06:03.189 16:44:48 -- unit/unittest.sh@186 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:03.189 16:44:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.189 16:44:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.189 16:44:48 -- common/autotest_common.sh@10 -- # set +x 00:06:03.189 ************************************ 00:06:03.189 START TEST unittest_pci_event 00:06:03.189 ************************************ 00:06:03.189 16:44:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:03.189 00:06:03.189 00:06:03.189 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.189 http://cunit.sourceforge.net/ 00:06:03.189 00:06:03.189 00:06:03.189 Suite: pci_event 00:06:03.189 Test: test_pci_parse_event ...[2024-11-05 16:44:48.249865] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:06:03.189 passed[2024-11-05 16:44:48.250610] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:06:03.189 00:06:03.189 00:06:03.189 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.189 suites 1 1 
n/a 0 0 00:06:03.189 tests 1 1 1 0 0 00:06:03.189 asserts 15 15 15 0 n/a 00:06:03.189 00:06:03.189 Elapsed time = 0.001 seconds 00:06:03.189 00:06:03.189 real 0m0.035s 00:06:03.189 user 0m0.019s 00:06:03.189 sys 0m0.011s 00:06:03.189 16:44:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.189 16:44:48 -- common/autotest_common.sh@10 -- # set +x 00:06:03.189 ************************************ 00:06:03.189 END TEST unittest_pci_event 00:06:03.189 ************************************ 00:06:03.189 16:44:48 -- unit/unittest.sh@187 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:03.189 16:44:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.189 16:44:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.189 16:44:48 -- common/autotest_common.sh@10 -- # set +x 00:06:03.189 ************************************ 00:06:03.189 START TEST unittest_include 00:06:03.189 ************************************ 00:06:03.189 16:44:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:03.189 00:06:03.189 00:06:03.189 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.189 http://cunit.sourceforge.net/ 00:06:03.189 00:06:03.189 00:06:03.189 Suite: histogram 00:06:03.189 Test: histogram_test ...passed 00:06:03.189 Test: histogram_merge ...passed 00:06:03.189 00:06:03.189 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.189 suites 1 1 n/a 0 0 00:06:03.189 tests 2 2 2 0 0 00:06:03.189 asserts 50 50 50 0 n/a 00:06:03.189 00:06:03.189 Elapsed time = 0.006 seconds 00:06:03.189 00:06:03.189 real 0m0.034s 00:06:03.189 user 0m0.030s 00:06:03.189 sys 0m0.004s 00:06:03.189 16:44:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.189 16:44:48 -- common/autotest_common.sh@10 -- # set +x 00:06:03.189 ************************************ 00:06:03.189 END TEST unittest_include 00:06:03.189 ************************************ 00:06:03.189 16:44:48 -- unit/unittest.sh@188 -- # run_test unittest_bdev unittest_bdev 00:06:03.189 16:44:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.189 16:44:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.189 16:44:48 -- common/autotest_common.sh@10 -- # set +x 00:06:03.189 ************************************ 00:06:03.189 START TEST unittest_bdev 00:06:03.189 ************************************ 00:06:03.189 16:44:48 -- common/autotest_common.sh@1114 -- # unittest_bdev 00:06:03.189 16:44:48 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:06:03.189 00:06:03.189 00:06:03.189 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.189 http://cunit.sourceforge.net/ 00:06:03.189 00:06:03.189 00:06:03.189 Suite: bdev 00:06:03.189 Test: bytes_to_blocks_test ...passed 00:06:03.189 Test: num_blocks_test ...passed 00:06:03.189 Test: io_valid_test ...passed 00:06:03.189 Test: open_write_test ...[2024-11-05 16:44:48.493146] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:06:03.189 [2024-11-05 16:44:48.493477] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:06:03.189 [2024-11-05 16:44:48.493628] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write 
by module bdev_ut 00:06:03.189 passed 00:06:03.189 Test: claim_test ...passed 00:06:03.189 Test: alias_add_del_test ...[2024-11-05 16:44:48.586613] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:06:03.189 [2024-11-05 16:44:48.586787] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:06:03.189 [2024-11-05 16:44:48.586845] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:06:03.189 passed 00:06:03.189 Test: get_device_stat_test ...passed 00:06:03.189 Test: bdev_io_types_test ...passed 00:06:03.189 Test: bdev_io_wait_test ...passed 00:06:03.189 Test: bdev_io_spans_split_test ...passed 00:06:03.189 Test: bdev_io_boundary_split_test ...passed 00:06:03.189 Test: bdev_io_max_size_and_segment_split_test ...[2024-11-05 16:44:48.759107] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:06:03.189 passed 00:06:03.189 Test: bdev_io_mix_split_test ...passed 00:06:03.189 Test: bdev_io_split_with_io_wait ...passed 00:06:03.189 Test: bdev_io_write_unit_split_test ...[2024-11-05 16:44:48.861465] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:03.190 [2024-11-05 16:44:48.861614] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:03.190 [2024-11-05 16:44:48.861659] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:06:03.190 [2024-11-05 16:44:48.861753] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:06:03.190 passed 00:06:03.190 Test: bdev_io_alignment_with_boundary ...passed 00:06:03.190 Test: bdev_io_alignment ...passed 00:06:03.190 Test: bdev_histograms ...passed 00:06:03.190 Test: bdev_write_zeroes ...passed 00:06:03.190 Test: bdev_compare_and_write ...passed 00:06:03.190 Test: bdev_compare ...passed 00:06:03.190 Test: bdev_compare_emulated ...passed 00:06:03.190 Test: bdev_zcopy_write ...passed 00:06:03.190 Test: bdev_zcopy_read ...passed 00:06:03.190 Test: bdev_open_while_hotremove ...passed 00:06:03.190 Test: bdev_close_while_hotremove ...passed 00:06:03.190 Test: bdev_open_ext_test ...[2024-11-05 16:44:49.268355] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:03.190 passed 00:06:03.190 Test: bdev_open_ext_unregister ...[2024-11-05 16:44:49.268552] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:03.190 passed 00:06:03.190 Test: bdev_set_io_timeout ...passed 00:06:03.190 Test: bdev_set_qd_sampling ...passed 00:06:03.190 Test: lba_range_overlap ...passed 00:06:03.190 Test: lock_lba_range_check_ranges ...passed 00:06:03.190 Test: lock_lba_range_with_io_outstanding ...passed 00:06:03.190 Test: lock_lba_range_overlapped ...passed 00:06:03.190 Test: bdev_quiesce ...[2024-11-05 16:44:49.464593] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:06:03.190 passed 00:06:03.190 Test: bdev_io_abort ...passed 00:06:03.190 Test: bdev_unmap ...passed 00:06:03.190 Test: bdev_write_zeroes_split_test ...passed 00:06:03.190 Test: bdev_set_options_test ...[2024-11-05 16:44:49.575808] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:06:03.190 passed 00:06:03.190 Test: bdev_get_memory_domains ...passed 00:06:03.190 Test: bdev_io_ext ...passed 00:06:03.190 Test: bdev_io_ext_no_opts ...passed 00:06:03.190 Test: bdev_io_ext_invalid_opts ...passed 00:06:03.190 Test: bdev_io_ext_split ...passed 00:06:03.190 Test: bdev_io_ext_bounce_buffer ...passed 00:06:03.190 Test: bdev_register_uuid_alias ...[2024-11-05 16:44:49.765992] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 4a4094a1-18d8-4ba1-a668-8ce6547b9087 already exists 00:06:03.190 [2024-11-05 16:44:49.766136] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:4a4094a1-18d8-4ba1-a668-8ce6547b9087 alias for bdev bdev0 00:06:03.190 passed 00:06:03.190 Test: bdev_unregister_by_name ...[2024-11-05 16:44:49.783124] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:06:03.190 [2024-11-05 16:44:49.783183] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:06:03.190 passed 00:06:03.190 Test: for_each_bdev_test ...passed 00:06:03.190 Test: bdev_seek_test ...passed 00:06:03.190 Test: bdev_copy ...passed 00:06:03.190 Test: bdev_copy_split_test ...passed 00:06:03.190 Test: examine_locks ...passed 00:06:03.190 Test: claim_v2_rwo ...[2024-11-05 16:44:49.886225] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:03.190 [2024-11-05 16:44:49.886339] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:03.190 [2024-11-05 16:44:49.886371] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:03.190 [2024-11-05 16:44:49.886464] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:03.190 [2024-11-05 16:44:49.886494] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:03.190 [2024-11-05 16:44:49.886563] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:06:03.190 passed 00:06:03.190 Test: claim_v2_rom ...[2024-11-05 16:44:49.886811] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:03.190 [2024-11-05 16:44:49.886936] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:03.190 [2024-11-05 16:44:49.886984] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:06:03.190 [2024-11-05 16:44:49.887030] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:03.190 [2024-11-05 16:44:49.887118] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:06:03.190 passed 00:06:03.190 Test: claim_v2_rwm ...[2024-11-05 16:44:49.887186] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:03.190 [2024-11-05 16:44:49.887345] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:03.190 [2024-11-05 16:44:49.887431] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:03.190 [2024-11-05 16:44:49.887483] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:03.190 [2024-11-05 16:44:49.887529] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:03.190 [2024-11-05 16:44:49.887560] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:03.190 [2024-11-05 16:44:49.887603] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:06:03.190 [2024-11-05 16:44:49.887663] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:03.190 passed 00:06:03.190 Test: claim_v2_existing_writer ...[2024-11-05 16:44:49.887867] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:03.190 passed 00:06:03.190 Test: claim_v2_existing_v1 ...[2024-11-05 16:44:49.887921] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:03.190 [2024-11-05 16:44:49.888081] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:03.190 [2024-11-05 16:44:49.888132] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:03.190 [2024-11-05 16:44:49.888162] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:03.190 passed 00:06:03.190 Test: claim_v1_existing_v2 ...[2024-11-05 16:44:49.888319] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:03.190 [2024-11-05 16:44:49.888396] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:03.190 [2024-11-05 
16:44:49.888448] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:03.190 passed 00:06:03.190 Test: examine_claimed ...[2024-11-05 16:44:49.888839] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:06:03.190 passed 00:06:03.190 00:06:03.190 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.190 suites 1 1 n/a 0 0 00:06:03.190 tests 59 59 59 0 0 00:06:03.190 asserts 4599 4599 4599 0 n/a 00:06:03.190 00:06:03.190 Elapsed time = 1.470 seconds 00:06:03.190 16:44:49 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:06:03.190 00:06:03.190 00:06:03.190 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.190 http://cunit.sourceforge.net/ 00:06:03.190 00:06:03.190 00:06:03.190 Suite: nvme 00:06:03.190 Test: test_create_ctrlr ...passed 00:06:03.190 Test: test_reset_ctrlr ...[2024-11-05 16:44:49.938228] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:03.190 passed 00:06:03.190 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:06:03.190 Test: test_failover_ctrlr ...passed 00:06:03.190 Test: test_race_between_failover_and_add_secondary_trid ...[2024-11-05 16:44:49.940863] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:03.190 [2024-11-05 16:44:49.941162] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:03.190 [2024-11-05 16:44:49.941364] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:03.190 passed 00:06:03.190 Test: test_pending_reset ...[2024-11-05 16:44:49.942948] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:03.190 [2024-11-05 16:44:49.943210] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:03.190 passed 00:06:03.190 Test: test_attach_ctrlr ...[2024-11-05 16:44:49.944391] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:06:03.190 passed 00:06:03.190 Test: test_aer_cb ...passed 00:06:03.190 Test: test_submit_nvme_cmd ...passed 00:06:03.190 Test: test_add_remove_trid ...passed 00:06:03.190 Test: test_abort ...[2024-11-05 16:44:49.948078] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:06:03.190 passed 00:06:03.190 Test: test_get_io_qpair ...passed 00:06:03.190 Test: test_bdev_unregister ...passed 00:06:03.190 Test: test_compare_ns ...passed 00:06:03.190 Test: test_init_ana_log_page ...passed 00:06:03.190 Test: test_get_memory_domains ...passed 00:06:03.190 Test: test_reconnect_qpair ...[2024-11-05 16:44:49.951024] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:03.191 passed 00:06:03.191 Test: test_create_bdev_ctrlr ...[2024-11-05 16:44:49.951542] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:06:03.191 passed 00:06:03.191 Test: test_add_multi_ns_to_bdev ...[2024-11-05 16:44:49.952898] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:06:03.191 passed 00:06:03.191 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:06:03.191 Test: test_admin_path ...passed 00:06:03.191 Test: test_reset_bdev_ctrlr ...passed 00:06:03.191 Test: test_find_io_path ...passed 00:06:03.191 Test: test_retry_io_if_ana_state_is_updating ...passed 00:06:03.191 Test: test_retry_io_for_io_path_error ...passed 00:06:03.191 Test: test_retry_io_count ...passed 00:06:03.191 Test: test_concurrent_read_ana_log_page ...passed 00:06:03.191 Test: test_retry_io_for_ana_error ...passed 00:06:03.191 Test: test_check_io_error_resiliency_params ...[2024-11-05 16:44:49.960137] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:06:03.191 [2024-11-05 16:44:49.960217] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:03.191 [2024-11-05 16:44:49.960244] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:03.191 [2024-11-05 16:44:49.960284] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:06:03.191 [2024-11-05 16:44:49.960305] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:03.191 [2024-11-05 16:44:49.960345] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:03.191 [2024-11-05 16:44:49.960366] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:06:03.191 passed 00:06:03.191 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-11-05 16:44:49.960423] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:06:03.191 [2024-11-05 16:44:49.960452] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:06:03.191 passed 00:06:03.191 Test: test_reconnect_ctrlr ...[2024-11-05 16:44:49.961239] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:03.191 [2024-11-05 16:44:49.961419] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:03.191 [2024-11-05 16:44:49.961744] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:03.191 [2024-11-05 16:44:49.961872] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:03.191 [2024-11-05 16:44:49.962005] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:03.191 passed 00:06:03.191 Test: test_retry_failover_ctrlr ...[2024-11-05 16:44:49.962378] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:03.191 passed 00:06:03.191 Test: test_fail_path ...[2024-11-05 16:44:49.962930] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:03.191 [2024-11-05 16:44:49.963103] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:03.191 [2024-11-05 16:44:49.963221] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:03.191 [2024-11-05 16:44:49.963345] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:03.191 [2024-11-05 16:44:49.963489] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:03.191 passed 00:06:03.191 Test: test_nvme_ns_cmp ...passed 00:06:03.191 Test: test_ana_transition ...passed 00:06:03.191 Test: test_set_preferred_path ...passed 00:06:03.191 Test: test_find_next_io_path ...passed 00:06:03.191 Test: test_find_io_path_min_qd ...passed 00:06:03.191 Test: test_disable_auto_failback ...[2024-11-05 16:44:49.965236] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:03.191 passed 00:06:03.191 Test: test_set_multipath_policy ...passed 00:06:03.191 Test: test_uuid_generation ...passed 00:06:03.191 Test: test_retry_io_to_same_path ...passed 00:06:03.191 Test: test_race_between_reset_and_disconnected ...passed 00:06:03.191 Test: test_ctrlr_op_rpc ...passed 00:06:03.191 Test: test_bdev_ctrlr_op_rpc ...passed 00:06:03.191 Test: test_disable_enable_ctrlr ...[2024-11-05 16:44:49.968925] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:03.191 [2024-11-05 16:44:49.969111] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:03.191 passed 00:06:03.191 Test: test_delete_ctrlr_done ...passed 00:06:03.191 Test: test_ns_remove_during_reset ...passed 00:06:03.191 00:06:03.191 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.191 suites 1 1 n/a 0 0 00:06:03.191 tests 48 48 48 0 0 00:06:03.191 asserts 3553 3553 3553 0 n/a 00:06:03.191 00:06:03.191 Elapsed time = 0.033 seconds 00:06:03.191 16:44:49 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:06:03.191 Test Options 00:06:03.191 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:06:03.191 00:06:03.191 00:06:03.191 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.191 http://cunit.sourceforge.net/ 00:06:03.191 00:06:03.191 00:06:03.191 Suite: raid 00:06:03.191 Test: test_create_raid ...passed 00:06:03.191 Test: test_create_raid_superblock ...passed 00:06:03.191 Test: test_delete_raid ...passed 00:06:03.191 Test: test_create_raid_invalid_args ...[2024-11-05 16:44:50.014069] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:06:03.191 [2024-11-05 16:44:50.014455] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:06:03.191 [2024-11-05 16:44:50.014878] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:06:03.191 [2024-11-05 16:44:50.015091] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:03.191 [2024-11-05 16:44:50.015708] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:03.191 passed 00:06:03.191 Test: test_delete_raid_invalid_args ...passed 00:06:03.191 Test: test_io_channel ...passed 00:06:03.191 Test: test_reset_io ...passed 00:06:03.191 Test: test_write_io ...passed 00:06:03.191 Test: test_read_io ...passed 00:06:03.191 Test: test_unmap_io ...passed 00:06:03.191 Test: test_io_failure ...[2024-11-05 16:44:50.879809] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:06:03.191 passed 00:06:03.191 Test: test_multi_raid_no_io ...passed 00:06:03.191 Test: test_multi_raid_with_io ...passed 00:06:03.191 Test: test_io_type_supported ...passed 00:06:03.191 Test: test_raid_json_dump_info ...passed 00:06:03.191 Test: test_context_size ...passed 00:06:03.191 Test: test_raid_level_conversions ...passed 00:06:03.191 Test: test_raid_process ...passed 00:06:03.191 Test: test_raid_io_split ...passed 00:06:03.191 00:06:03.191 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.191 suites 1 1 n/a 0 0 00:06:03.191 tests 19 19 19 0 0 00:06:03.191 asserts 177879 177879 177879 0 n/a 00:06:03.191 00:06:03.191 Elapsed time = 0.879 seconds 00:06:03.191 16:44:50 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:06:03.191 00:06:03.191 00:06:03.191 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.191 http://cunit.sourceforge.net/ 00:06:03.191 00:06:03.191 00:06:03.191 Suite: raid_sb 00:06:03.191 Test: test_raid_bdev_write_superblock ...passed 00:06:03.191 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:03.191 Test: 
test_raid_bdev_parse_superblock ...[2024-11-05 16:44:50.932903] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:03.191 passed 00:06:03.191 00:06:03.191 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.191 suites 1 1 n/a 0 0 00:06:03.191 tests 3 3 3 0 0 00:06:03.191 asserts 32 32 32 0 n/a 00:06:03.191 00:06:03.191 Elapsed time = 0.001 seconds 00:06:03.191 16:44:50 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:06:03.191 00:06:03.191 00:06:03.191 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.191 http://cunit.sourceforge.net/ 00:06:03.191 00:06:03.191 00:06:03.191 Suite: concat 00:06:03.191 Test: test_concat_start ...passed 00:06:03.191 Test: test_concat_rw ...passed 00:06:03.191 Test: test_concat_null_payload ...passed 00:06:03.191 00:06:03.191 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.191 suites 1 1 n/a 0 0 00:06:03.191 tests 3 3 3 0 0 00:06:03.191 asserts 8097 8097 8097 0 n/a 00:06:03.191 00:06:03.191 Elapsed time = 0.007 seconds 00:06:03.191 16:44:50 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:06:03.191 00:06:03.191 00:06:03.191 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.191 http://cunit.sourceforge.net/ 00:06:03.191 00:06:03.191 00:06:03.191 Suite: raid1 00:06:03.191 Test: test_raid1_start ...passed 00:06:03.192 Test: test_raid1_read_balancing ...passed 00:06:03.192 00:06:03.192 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.192 suites 1 1 n/a 0 0 00:06:03.192 tests 2 2 2 0 0 00:06:03.192 asserts 2856 2856 2856 0 n/a 00:06:03.192 00:06:03.192 Elapsed time = 0.004 seconds 00:06:03.192 16:44:51 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:06:03.192 00:06:03.192 00:06:03.192 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.192 http://cunit.sourceforge.net/ 00:06:03.192 00:06:03.192 00:06:03.192 Suite: zone 00:06:03.192 Test: test_zone_get_operation ...passed 00:06:03.192 Test: test_bdev_zone_get_info ...passed 00:06:03.192 Test: test_bdev_zone_management ...passed 00:06:03.192 Test: test_bdev_zone_append ...passed 00:06:03.192 Test: test_bdev_zone_append_with_md ...passed 00:06:03.192 Test: test_bdev_zone_appendv ...passed 00:06:03.192 Test: test_bdev_zone_appendv_with_md ...passed 00:06:03.192 Test: test_bdev_io_get_append_location ...passed 00:06:03.192 00:06:03.192 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.192 suites 1 1 n/a 0 0 00:06:03.192 tests 8 8 8 0 0 00:06:03.192 asserts 94 94 94 0 n/a 00:06:03.192 00:06:03.192 Elapsed time = 0.000 seconds 00:06:03.192 16:44:51 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:06:03.192 00:06:03.192 00:06:03.192 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.192 http://cunit.sourceforge.net/ 00:06:03.192 00:06:03.192 00:06:03.192 Suite: gpt_parse 00:06:03.192 Test: test_parse_mbr_and_primary ...[2024-11-05 16:44:51.067536] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:03.192 [2024-11-05 16:44:51.067925] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:03.192 [2024-11-05 16:44:51.068045] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:03.192 [2024-11-05 16:44:51.068181] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:03.192 [2024-11-05 16:44:51.068277] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:03.192 [2024-11-05 16:44:51.068425] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:03.192 passed 00:06:03.192 Test: test_parse_secondary ...[2024-11-05 16:44:51.069505] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:03.192 [2024-11-05 16:44:51.069621] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:03.192 [2024-11-05 16:44:51.069717] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:03.192 [2024-11-05 16:44:51.069783] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:03.192 passed 00:06:03.192 Test: test_check_mbr ...[2024-11-05 16:44:51.070718] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:03.192 passed 00:06:03.192 Test: test_read_header ...[2024-11-05 16:44:51.070825] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:03.192 [2024-11-05 16:44:51.070958] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:06:03.192 [2024-11-05 16:44:51.071114] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:06:03.192 [2024-11-05 16:44:51.071248] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:06:03.192 [2024-11-05 16:44:51.071348] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:06:03.192 [2024-11-05 16:44:51.071422] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:06:03.192 [2024-11-05 16:44:51.071498] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:06:03.192 passed 00:06:03.192 Test: test_read_partitions ...[2024-11-05 16:44:51.071613] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:06:03.192 [2024-11-05 16:44:51.071716] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:06:03.192 [2024-11-05 16:44:51.071778] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:06:03.192 [2024-11-05 16:44:51.071840] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:06:03.192 [2024-11-05 16:44:51.072312] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: 
GPT partition entry array crc32 did not match 00:06:03.192 passed 00:06:03.192 00:06:03.192 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.192 suites 1 1 n/a 0 0 00:06:03.192 tests 5 5 5 0 0 00:06:03.192 asserts 33 33 33 0 n/a 00:06:03.192 00:06:03.192 Elapsed time = 0.006 seconds 00:06:03.192 16:44:51 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:06:03.192 00:06:03.192 00:06:03.192 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.192 http://cunit.sourceforge.net/ 00:06:03.192 00:06:03.192 00:06:03.192 Suite: bdev_part 00:06:03.192 Test: part_test ...[2024-11-05 16:44:51.112218] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:06:03.192 passed 00:06:03.192 Test: part_free_test ...passed 00:06:03.192 Test: part_get_io_channel_test ...passed 00:06:03.192 Test: part_construct_ext ...passed 00:06:03.192 00:06:03.192 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.192 suites 1 1 n/a 0 0 00:06:03.192 tests 4 4 4 0 0 00:06:03.192 asserts 48 48 48 0 n/a 00:06:03.192 00:06:03.192 Elapsed time = 0.053 seconds 00:06:03.192 16:44:51 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:06:03.192 00:06:03.192 00:06:03.192 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.192 http://cunit.sourceforge.net/ 00:06:03.192 00:06:03.192 00:06:03.192 Suite: scsi_nvme_suite 00:06:03.192 Test: scsi_nvme_translate_test ...passed 00:06:03.192 00:06:03.192 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.192 suites 1 1 n/a 0 0 00:06:03.192 tests 1 1 1 0 0 00:06:03.192 asserts 104 104 104 0 n/a 00:06:03.192 00:06:03.192 Elapsed time = 0.000 seconds 00:06:03.192 16:44:51 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:06:03.192 00:06:03.192 00:06:03.192 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.192 http://cunit.sourceforge.net/ 00:06:03.192 00:06:03.192 00:06:03.192 Suite: lvol 00:06:03.192 Test: ut_lvs_init ...[2024-11-05 16:44:51.235292] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:06:03.192 [2024-11-05 16:44:51.235701] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:06:03.192 passed 00:06:03.192 Test: ut_lvol_init ...passed 00:06:03.192 Test: ut_lvol_snapshot ...passed 00:06:03.192 Test: ut_lvol_clone ...passed 00:06:03.192 Test: ut_lvs_destroy ...passed 00:06:03.192 Test: ut_lvs_unload ...passed 00:06:03.192 Test: ut_lvol_resize ...[2024-11-05 16:44:51.237071] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:06:03.192 passed 00:06:03.192 Test: ut_lvol_set_read_only ...passed 00:06:03.192 Test: ut_lvol_hotremove ...passed 00:06:03.192 Test: ut_vbdev_lvol_get_io_channel ...passed 00:06:03.192 Test: ut_vbdev_lvol_io_type_supported ...passed 00:06:03.192 Test: ut_lvol_read_write ...passed 00:06:03.192 Test: ut_vbdev_lvol_submit_request ...passed 00:06:03.192 Test: ut_lvol_examine_config ...passed 00:06:03.192 Test: ut_lvol_examine_disk ...[2024-11-05 16:44:51.237758] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:06:03.192 passed 00:06:03.192 Test: ut_lvol_rename ...[2024-11-05 16:44:51.238799] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:06:03.192 [2024-11-05 16:44:51.238942] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:06:03.192 passed 00:06:03.192 Test: ut_bdev_finish ...passed 00:06:03.192 Test: ut_lvs_rename ...passed 00:06:03.192 Test: ut_lvol_seek ...passed 00:06:03.192 Test: ut_esnap_dev_create ...[2024-11-05 16:44:51.239672] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:06:03.192 [2024-11-05 16:44:51.239758] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:06:03.192 [2024-11-05 16:44:51.239786] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:06:03.192 [2024-11-05 16:44:51.239835] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:06:03.192 passed 00:06:03.193 Test: ut_lvol_esnap_clone_bad_args ...[2024-11-05 16:44:51.240002] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:06:03.193 [2024-11-05 16:44:51.240044] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:06:03.193 passed 00:06:03.193 00:06:03.193 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.193 suites 1 1 n/a 0 0 00:06:03.193 tests 21 21 21 0 0 00:06:03.193 asserts 712 712 712 0 n/a 00:06:03.193 00:06:03.193 Elapsed time = 0.005 seconds 00:06:03.193 16:44:51 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:06:03.193 00:06:03.193 00:06:03.193 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.193 http://cunit.sourceforge.net/ 00:06:03.193 00:06:03.193 00:06:03.193 Suite: zone_block 00:06:03.193 Test: test_zone_block_create ...passed 00:06:03.193 Test: test_zone_block_create_invalid ...[2024-11-05 16:44:51.299758] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:06:03.193 [2024-11-05 16:44:51.300421] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-11-05 16:44:51.300807] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:06:03.193 [2024-11-05 16:44:51.301043] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-11-05 16:44:51.301357] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:06:03.193 [2024-11-05 16:44:51.301550] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-11-05 16:44:51.301822] 
/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:06:03.193 [2024-11-05 16:44:51.302037] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:06:03.193 Test: test_get_zone_info ...[2024-11-05 16:44:51.302843] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 [2024-11-05 16:44:51.303089] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 [2024-11-05 16:44:51.303304] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 passed 00:06:03.193 Test: test_supported_io_types ...passed 00:06:03.193 Test: test_reset_zone ...[2024-11-05 16:44:51.304311] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 [2024-11-05 16:44:51.304515] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 passed 00:06:03.193 Test: test_open_zone ...[2024-11-05 16:44:51.305174] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 [2024-11-05 16:44:51.306020] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 [2024-11-05 16:44:51.306224] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 passed 00:06:03.193 Test: test_zone_write ...[2024-11-05 16:44:51.306860] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:03.193 [2024-11-05 16:44:51.307109] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 [2024-11-05 16:44:51.307323] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:03.193 [2024-11-05 16:44:51.307526] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 [2024-11-05 16:44:51.313273] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:06:03.193 [2024-11-05 16:44:51.313470] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:03.193 [2024-11-05 16:44:51.313701] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:06:03.193 [2024-11-05 16:44:51.313890] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 [2024-11-05 16:44:51.319591] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:03.193 [2024-11-05 16:44:51.319798] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 passed 00:06:03.193 Test: test_zone_read ...[2024-11-05 16:44:51.320450] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:06:03.193 [2024-11-05 16:44:51.320650] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 [2024-11-05 16:44:51.320890] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:06:03.193 [2024-11-05 16:44:51.321065] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 [2024-11-05 16:44:51.321680] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:06:03.193 [2024-11-05 16:44:51.321854] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 passed 00:06:03.193 Test: test_close_zone ...[2024-11-05 16:44:51.322378] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 [2024-11-05 16:44:51.322583] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 [2024-11-05 16:44:51.322998] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 [2024-11-05 16:44:51.323177] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 passed 00:06:03.193 Test: test_finish_zone ...[2024-11-05 16:44:51.323932] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 [2024-11-05 16:44:51.324120] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:03.193 passed 00:06:03.193 Test: test_append_zone ...[2024-11-05 16:44:51.324683] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:03.193 [2024-11-05 16:44:51.324852] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 [2024-11-05 16:44:51.325068] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:03.193 [2024-11-05 16:44:51.325224] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 [2024-11-05 16:44:51.336636] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:03.193 [2024-11-05 16:44:51.336824] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:03.193 passed 00:06:03.193 00:06:03.193 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.193 suites 1 1 n/a 0 0 00:06:03.193 tests 11 11 11 0 0 00:06:03.193 asserts 3437 3437 3437 0 n/a 00:06:03.193 00:06:03.193 Elapsed time = 0.034 seconds 00:06:03.193 16:44:51 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:06:03.193 00:06:03.193 00:06:03.193 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.193 http://cunit.sourceforge.net/ 00:06:03.193 00:06:03.193 00:06:03.193 Suite: bdev 00:06:03.193 Test: basic ...[2024-11-05 16:44:51.418307] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55b07e06e401): Operation not permitted (rc=-1) 00:06:03.193 [2024-11-05 16:44:51.418632] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55b07e06e3c0): Operation not permitted (rc=-1) 00:06:03.193 [2024-11-05 16:44:51.418691] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55b07e06e401): Operation not permitted (rc=-1) 00:06:03.193 passed 00:06:03.193 Test: unregister_and_close ...passed 00:06:03.193 Test: unregister_and_close_different_threads ...passed 00:06:03.193 Test: basic_qos ...passed 00:06:03.193 Test: put_channel_during_reset ...passed 00:06:03.193 Test: aborted_reset ...passed 00:06:03.193 Test: aborted_reset_no_outstanding_io ...passed 00:06:03.193 Test: io_during_reset ...passed 00:06:03.193 Test: reset_completions ...passed 00:06:03.193 Test: io_during_qos_queue ...passed 00:06:03.193 Test: io_during_qos_reset ...passed 00:06:03.193 Test: enomem ...passed 00:06:03.193 Test: enomem_multi_bdev ...passed 00:06:03.193 Test: enomem_multi_bdev_unregister ...passed 00:06:03.193 Test: enomem_multi_io_target ...passed 00:06:03.193 Test: qos_dynamic_enable ...passed 00:06:03.193 Test: bdev_histograms_mt ...passed 00:06:03.193 Test: bdev_set_io_timeout_mt ...[2024-11-05 16:44:52.065730] thread.c: 467:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:06:03.193 passed 00:06:03.451 Test: lock_lba_range_then_submit_io ...[2024-11-05 16:44:52.082447] thread.c:2165:spdk_io_device_register: *ERROR*: io_device 0x55b07e06e380 already registered (old:0x6130000003c0 new:0x613000000c80) 00:06:03.451 
passed 00:06:03.451 Test: unregister_during_reset ...passed 00:06:03.451 Test: event_notify_and_close ...passed 00:06:03.452 Test: unregister_and_qos_poller ...passed 00:06:03.452 Suite: bdev_wrong_thread 00:06:03.452 Test: spdk_bdev_register_wt ...[2024-11-05 16:44:52.208211] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:06:03.452 passed 00:06:03.452 Test: spdk_bdev_examine_wt ...[2024-11-05 16:44:52.208535] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:06:03.452 passed 00:06:03.452 00:06:03.452 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.452 suites 2 2 n/a 0 0 00:06:03.452 tests 24 24 24 0 0 00:06:03.452 asserts 621 621 621 0 n/a 00:06:03.452 00:06:03.452 Elapsed time = 0.814 seconds 00:06:03.452 00:06:03.452 real 0m3.838s 00:06:03.452 user 0m1.694s 00:06:03.452 sys 0m2.137s 00:06:03.452 16:44:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.452 16:44:52 -- common/autotest_common.sh@10 -- # set +x 00:06:03.452 ************************************ 00:06:03.452 END TEST unittest_bdev 00:06:03.452 ************************************ 00:06:03.452 16:44:52 -- unit/unittest.sh@189 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:03.452 16:44:52 -- unit/unittest.sh@194 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:03.452 16:44:52 -- unit/unittest.sh@199 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:03.452 16:44:52 -- unit/unittest.sh@203 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:03.452 16:44:52 -- unit/unittest.sh@204 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:03.452 16:44:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.452 16:44:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.452 16:44:52 -- common/autotest_common.sh@10 -- # set +x 00:06:03.452 ************************************ 00:06:03.452 START TEST unittest_bdev_raid5f 00:06:03.452 ************************************ 00:06:03.452 16:44:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:03.452 00:06:03.452 00:06:03.452 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.452 http://cunit.sourceforge.net/ 00:06:03.452 00:06:03.452 00:06:03.452 Suite: raid5f 00:06:03.452 Test: test_raid5f_start ...passed 00:06:04.017 Test: test_raid5f_submit_read_request ...passed 00:06:04.017 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:06:07.320 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:06:25.457 Test: test_raid5f_chunk_write_error ...passed 00:06:30.724 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:06:32.633 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:06:59.221 Test: test_raid5f_submit_read_request_degraded ...passed 00:06:59.221 00:06:59.221 Run Summary: Type Total Ran Passed Failed Inactive 00:06:59.221 suites 1 1 n/a 0 0 00:06:59.221 tests 8 8 8 0 0 00:06:59.221 asserts 351864 351864 351864 0 n/a 00:06:59.221 00:06:59.221 Elapsed time = 54.401 seconds 00:06:59.221 00:06:59.221 real 0m54.509s 00:06:59.221 user 
0m51.703s 00:06:59.221 sys 0m2.784s 00:06:59.221 16:45:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.221 ************************************ 00:06:59.221 16:45:46 -- common/autotest_common.sh@10 -- # set +x 00:06:59.221 END TEST unittest_bdev_raid5f 00:06:59.221 ************************************ 00:06:59.221 16:45:46 -- unit/unittest.sh@207 -- # run_test unittest_blob_blobfs unittest_blob 00:06:59.221 16:45:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:59.221 16:45:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.221 16:45:46 -- common/autotest_common.sh@10 -- # set +x 00:06:59.221 ************************************ 00:06:59.221 START TEST unittest_blob_blobfs 00:06:59.222 ************************************ 00:06:59.222 16:45:46 -- common/autotest_common.sh@1114 -- # unittest_blob 00:06:59.222 16:45:46 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:06:59.222 16:45:46 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:06:59.222 00:06:59.222 00:06:59.222 CUnit - A unit testing framework for C - Version 2.1-3 00:06:59.222 http://cunit.sourceforge.net/ 00:06:59.222 00:06:59.222 00:06:59.222 Suite: blob_nocopy_noextent 00:06:59.222 Test: blob_init ...[2024-11-05 16:45:46.903137] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:59.222 passed 00:06:59.222 Test: blob_thin_provision ...passed 00:06:59.222 Test: blob_read_only ...passed 00:06:59.222 Test: bs_load ...[2024-11-05 16:45:47.015989] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:59.222 passed 00:06:59.222 Test: bs_load_custom_cluster_size ...passed 00:06:59.222 Test: bs_load_after_failed_grow ...passed 00:06:59.222 Test: bs_cluster_sz ...[2024-11-05 16:45:47.054259] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:59.222 [2024-11-05 16:45:47.055084] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:06:59.222 [2024-11-05 16:45:47.055538] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:59.222 passed 00:06:59.222 Test: bs_resize_md ...passed 00:06:59.222 Test: bs_destroy ...passed 00:06:59.222 Test: bs_type ...passed 00:06:59.222 Test: bs_super_block ...passed 00:06:59.222 Test: bs_test_recover_cluster_count ...passed 00:06:59.222 Test: bs_grow_live ...passed 00:06:59.222 Test: bs_grow_live_no_space ...passed 00:06:59.222 Test: bs_test_grow ...passed 00:06:59.222 Test: blob_serialize_test ...passed 00:06:59.222 Test: super_block_crc ...passed 00:06:59.222 Test: blob_thin_prov_write_count_io ...passed 00:06:59.222 Test: bs_load_iter_test ...passed 00:06:59.222 Test: blob_relations ...[2024-11-05 16:45:47.239982] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:59.222 [2024-11-05 16:45:47.240422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:59.222 [2024-11-05 16:45:47.241634] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:59.222 [2024-11-05 16:45:47.241874] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:59.222 passed 00:06:59.222 Test: blob_relations2 ...[2024-11-05 16:45:47.259705] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:59.222 [2024-11-05 16:45:47.260080] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:59.222 [2024-11-05 16:45:47.260320] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:59.222 [2024-11-05 16:45:47.260514] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:59.222 [2024-11-05 16:45:47.262234] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:59.222 [2024-11-05 16:45:47.262480] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:59.222 [2024-11-05 16:45:47.263265] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:59.222 [2024-11-05 16:45:47.263544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:59.222 passed 00:06:59.222 Test: blob_relations3 ...passed 00:06:59.222 Test: blobstore_clean_power_failure ...passed 00:06:59.222 Test: blob_delete_snapshot_power_failure ...[2024-11-05 16:45:47.429541] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:59.222 [2024-11-05 16:45:47.443466] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:59.222 [2024-11-05 16:45:47.443824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:59.222 [2024-11-05 16:45:47.443953] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:59.222 [2024-11-05 16:45:47.457473] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:59.222 [2024-11-05 16:45:47.457797] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:59.222 [2024-11-05 16:45:47.457940] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:59.222 [2024-11-05 16:45:47.458132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:59.222 [2024-11-05 16:45:47.471954] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:59.222 [2024-11-05 16:45:47.472406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:59.222 [2024-11-05 16:45:47.485904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:59.222 [2024-11-05 16:45:47.486292] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:59.222 [2024-11-05 16:45:47.500660] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:59.222 [2024-11-05 16:45:47.501024] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:59.222 passed 00:06:59.222 Test: blob_create_snapshot_power_failure ...[2024-11-05 16:45:47.540710] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:59.222 [2024-11-05 16:45:47.567490] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:59.222 [2024-11-05 16:45:47.580962] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:59.222 passed 00:06:59.222 Test: blob_io_unit ...passed 00:06:59.222 Test: blob_io_unit_compatibility ...passed 00:06:59.222 Test: blob_ext_md_pages ...passed 00:06:59.222 Test: blob_esnap_io_4096_4096 ...passed 00:06:59.222 Test: blob_esnap_io_512_512 ...passed 00:06:59.222 Test: blob_esnap_io_4096_512 ...passed 00:06:59.222 Test: blob_esnap_io_512_4096 ...passed 00:06:59.222 Suite: blob_bs_nocopy_noextent 00:06:59.222 Test: blob_open ...passed 00:06:59.222 Test: blob_create ...[2024-11-05 16:45:47.841015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:59.222 passed 00:06:59.222 Test: blob_create_loop ...passed 00:06:59.222 Test: blob_create_fail ...[2024-11-05 16:45:47.952079] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:59.222 passed 00:06:59.222 Test: blob_create_internal ...passed 00:06:59.222 Test: blob_create_zero_extent ...passed 00:06:59.222 Test: blob_snapshot ...passed 00:06:59.484 Test: blob_clone ...passed 00:06:59.484 Test: blob_inflate ...[2024-11-05 16:45:48.149174] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:59.484 passed 00:06:59.484 Test: blob_delete ...passed 00:06:59.484 Test: blob_resize_test ...[2024-11-05 16:45:48.215258] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:59.484 passed 00:06:59.484 Test: channel_ops ...passed 00:06:59.484 Test: blob_super ...passed 00:06:59.484 Test: blob_rw_verify_iov ...passed 00:06:59.484 Test: blob_unmap ...passed 00:06:59.743 Test: blob_iter ...passed 00:06:59.743 Test: blob_parse_md ...passed 00:06:59.743 Test: bs_load_pending_removal ...passed 00:06:59.743 Test: bs_unload ...[2024-11-05 16:45:48.473880] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:59.743 passed 00:06:59.743 Test: bs_usable_clusters ...passed 00:06:59.743 Test: blob_crc ...[2024-11-05 16:45:48.539716] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:59.743 [2024-11-05 16:45:48.540244] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:59.743 passed 00:06:59.743 Test: blob_flags ...passed 00:06:59.743 Test: bs_version ...passed 00:07:00.001 Test: blob_set_xattrs_test ...[2024-11-05 16:45:48.640091] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:00.001 [2024-11-05 16:45:48.640550] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:00.001 passed 00:07:00.001 Test: blob_thin_prov_alloc ...passed 00:07:00.001 Test: blob_insert_cluster_msg_test ...passed 00:07:00.001 Test: blob_thin_prov_rw ...passed 00:07:00.260 Test: blob_thin_prov_rle ...passed 00:07:00.260 Test: blob_thin_prov_rw_iov ...passed 00:07:00.260 Test: blob_snapshot_rw ...passed 00:07:00.260 Test: blob_snapshot_rw_iov ...passed 00:07:00.518 Test: blob_inflate_rw ...passed 00:07:00.518 Test: blob_snapshot_freeze_io ...passed 00:07:00.777 Test: blob_operation_split_rw ...passed 00:07:01.035 Test: blob_operation_split_rw_iov ...passed 00:07:01.035 Test: blob_simultaneous_operations ...[2024-11-05 16:45:49.691165] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:01.035 [2024-11-05 16:45:49.691647] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.035 [2024-11-05 16:45:49.692992] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:01.035 [2024-11-05 16:45:49.693215] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.035 [2024-11-05 16:45:49.704505] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:01.035 [2024-11-05 16:45:49.704731] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.035 [2024-11-05 16:45:49.705082] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:07:01.035 [2024-11-05 16:45:49.705326] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.035 passed 00:07:01.035 Test: blob_persist_test ...passed 00:07:01.035 Test: blob_decouple_snapshot ...passed 00:07:01.035 Test: blob_seek_io_unit ...passed 00:07:01.035 Test: blob_nested_freezes ...passed 00:07:01.035 Suite: blob_blob_nocopy_noextent 00:07:01.293 Test: blob_write ...passed 00:07:01.293 Test: blob_read ...passed 00:07:01.293 Test: blob_rw_verify ...passed 00:07:01.293 Test: blob_rw_verify_iov_nomem ...passed 00:07:01.293 Test: blob_rw_iov_read_only ...passed 00:07:01.293 Test: blob_xattr ...passed 00:07:01.293 Test: blob_dirty_shutdown ...passed 00:07:01.551 Test: blob_is_degraded ...passed 00:07:01.551 Suite: blob_esnap_bs_nocopy_noextent 00:07:01.551 Test: blob_esnap_create ...passed 00:07:01.551 Test: blob_esnap_thread_add_remove ...passed 00:07:01.551 Test: blob_esnap_clone_snapshot ...passed 00:07:01.551 Test: blob_esnap_clone_inflate ...passed 00:07:01.551 Test: blob_esnap_clone_decouple ...passed 00:07:01.551 Test: blob_esnap_clone_reload ...passed 00:07:01.810 Test: blob_esnap_hotplug ...passed 00:07:01.810 Suite: blob_nocopy_extent 00:07:01.810 Test: blob_init ...[2024-11-05 16:45:50.454059] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:01.810 passed 00:07:01.810 Test: blob_thin_provision ...passed 00:07:01.810 Test: blob_read_only ...passed 00:07:01.810 Test: bs_load ...[2024-11-05 16:45:50.501690] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:01.810 passed 00:07:01.810 Test: bs_load_custom_cluster_size ...passed 00:07:01.810 Test: bs_load_after_failed_grow ...passed 00:07:01.810 Test: bs_cluster_sz ...[2024-11-05 16:45:50.528677] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:01.810 [2024-11-05 16:45:50.528966] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:01.810 [2024-11-05 16:45:50.529053] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:01.810 passed 00:07:01.810 Test: bs_resize_md ...passed 00:07:01.810 Test: bs_destroy ...passed 00:07:01.810 Test: bs_type ...passed 00:07:01.810 Test: bs_super_block ...passed 00:07:01.810 Test: bs_test_recover_cluster_count ...passed 00:07:01.810 Test: bs_grow_live ...passed 00:07:01.810 Test: bs_grow_live_no_space ...passed 00:07:01.810 Test: bs_test_grow ...passed 00:07:01.810 Test: blob_serialize_test ...passed 00:07:01.810 Test: super_block_crc ...passed 00:07:01.810 Test: blob_thin_prov_write_count_io ...passed 00:07:01.810 Test: bs_load_iter_test ...passed 00:07:01.810 Test: blob_relations ...[2024-11-05 16:45:50.686459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:01.810 [2024-11-05 16:45:50.686606] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.810 [2024-11-05 16:45:50.687579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:01.810 [2024-11-05 16:45:50.687676] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.810 passed 00:07:02.068 Test: blob_relations2 ...[2024-11-05 16:45:50.701617] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:02.068 [2024-11-05 16:45:50.701717] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.068 [2024-11-05 16:45:50.701763] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:02.068 [2024-11-05 16:45:50.701794] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.068 [2024-11-05 16:45:50.703214] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:02.068 [2024-11-05 16:45:50.703302] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.068 [2024-11-05 16:45:50.703734] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:02.068 [2024-11-05 16:45:50.703800] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.068 passed 00:07:02.068 Test: blob_relations3 ...passed 00:07:02.068 Test: blobstore_clean_power_failure ...passed 00:07:02.068 Test: blob_delete_snapshot_power_failure ...[2024-11-05 16:45:50.855673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:02.068 [2024-11-05 16:45:50.867991] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:02.068 [2024-11-05 16:45:50.880714] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:02.068 [2024-11-05 16:45:50.880820] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:02.068 [2024-11-05 16:45:50.880869] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.068 [2024-11-05 16:45:50.893236] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:02.068 [2024-11-05 16:45:50.893338] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:02.068 [2024-11-05 16:45:50.893391] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:02.068 [2024-11-05 16:45:50.893420] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.068 [2024-11-05 16:45:50.905656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:02.068 [2024-11-05 16:45:50.905763] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:02.068 [2024-11-05 16:45:50.905819] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:02.068 [2024-11-05 16:45:50.905869] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.068 [2024-11-05 16:45:50.918431] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:02.068 [2024-11-05 16:45:50.918582] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.068 [2024-11-05 16:45:50.930928] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:02.068 [2024-11-05 16:45:50.931058] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.068 [2024-11-05 16:45:50.943467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:02.068 [2024-11-05 16:45:50.943594] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.327 passed 00:07:02.327 Test: blob_create_snapshot_power_failure ...[2024-11-05 16:45:50.979673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:02.327 [2024-11-05 16:45:50.991390] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:02.327 [2024-11-05 16:45:51.014486] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:02.327 [2024-11-05 16:45:51.026807] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:02.327 passed 00:07:02.327 Test: blob_io_unit ...passed 00:07:02.327 Test: blob_io_unit_compatibility ...passed 00:07:02.327 Test: blob_ext_md_pages ...passed 00:07:02.327 Test: blob_esnap_io_4096_4096 ...passed 00:07:02.327 Test: blob_esnap_io_512_512 ...passed 00:07:02.327 Test: blob_esnap_io_4096_512 ...passed 00:07:02.586 Test: 
blob_esnap_io_512_4096 ...passed 00:07:02.586 Suite: blob_bs_nocopy_extent 00:07:02.586 Test: blob_open ...passed 00:07:02.586 Test: blob_create ...[2024-11-05 16:45:51.271017] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:02.586 passed 00:07:02.586 Test: blob_create_loop ...passed 00:07:02.586 Test: blob_create_fail ...[2024-11-05 16:45:51.369054] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:02.586 passed 00:07:02.586 Test: blob_create_internal ...passed 00:07:02.586 Test: blob_create_zero_extent ...passed 00:07:02.845 Test: blob_snapshot ...passed 00:07:02.845 Test: blob_clone ...passed 00:07:02.845 Test: blob_inflate ...[2024-11-05 16:45:51.583701] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:02.845 passed 00:07:02.845 Test: blob_delete ...passed 00:07:02.845 Test: blob_resize_test ...[2024-11-05 16:45:51.664494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:02.845 passed 00:07:02.845 Test: channel_ops ...passed 00:07:03.103 Test: blob_super ...passed 00:07:03.103 Test: blob_rw_verify_iov ...passed 00:07:03.103 Test: blob_unmap ...passed 00:07:03.103 Test: blob_iter ...passed 00:07:03.103 Test: blob_parse_md ...passed 00:07:03.103 Test: bs_load_pending_removal ...passed 00:07:03.104 Test: bs_unload ...[2024-11-05 16:45:51.985789] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:03.362 passed 00:07:03.362 Test: bs_usable_clusters ...passed 00:07:03.362 Test: blob_crc ...[2024-11-05 16:45:52.068224] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:03.362 [2024-11-05 16:45:52.068387] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:03.362 passed 00:07:03.362 Test: blob_flags ...passed 00:07:03.362 Test: bs_version ...passed 00:07:03.362 Test: blob_set_xattrs_test ...[2024-11-05 16:45:52.187034] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:03.362 [2024-11-05 16:45:52.187162] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:03.362 passed 00:07:03.621 Test: blob_thin_prov_alloc ...passed 00:07:03.621 Test: blob_insert_cluster_msg_test ...passed 00:07:03.621 Test: blob_thin_prov_rw ...passed 00:07:03.621 Test: blob_thin_prov_rle ...passed 00:07:03.621 Test: blob_thin_prov_rw_iov ...passed 00:07:03.621 Test: blob_snapshot_rw ...passed 00:07:03.879 Test: blob_snapshot_rw_iov ...passed 00:07:03.879 Test: blob_inflate_rw ...passed 00:07:03.879 Test: blob_snapshot_freeze_io ...passed 00:07:04.137 Test: blob_operation_split_rw ...passed 00:07:04.137 Test: blob_operation_split_rw_iov ...passed 00:07:04.396 Test: blob_simultaneous_operations ...[2024-11-05 16:45:53.040829] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:04.396 [2024-11-05 
16:45:53.040939] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:04.396 [2024-11-05 16:45:53.042049] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:04.396 [2024-11-05 16:45:53.042095] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:04.396 [2024-11-05 16:45:53.051777] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:04.396 [2024-11-05 16:45:53.051839] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:04.396 [2024-11-05 16:45:53.051956] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:04.396 [2024-11-05 16:45:53.051981] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:04.396 passed 00:07:04.396 Test: blob_persist_test ...passed 00:07:04.396 Test: blob_decouple_snapshot ...passed 00:07:04.396 Test: blob_seek_io_unit ...passed 00:07:04.396 Test: blob_nested_freezes ...passed 00:07:04.396 Suite: blob_blob_nocopy_extent 00:07:04.396 Test: blob_write ...passed 00:07:04.655 Test: blob_read ...passed 00:07:04.655 Test: blob_rw_verify ...passed 00:07:04.655 Test: blob_rw_verify_iov_nomem ...passed 00:07:04.655 Test: blob_rw_iov_read_only ...passed 00:07:04.655 Test: blob_xattr ...passed 00:07:04.655 Test: blob_dirty_shutdown ...passed 00:07:04.655 Test: blob_is_degraded ...passed 00:07:04.655 Suite: blob_esnap_bs_nocopy_extent 00:07:04.655 Test: blob_esnap_create ...passed 00:07:04.914 Test: blob_esnap_thread_add_remove ...passed 00:07:04.914 Test: blob_esnap_clone_snapshot ...passed 00:07:04.914 Test: blob_esnap_clone_inflate ...passed 00:07:04.914 Test: blob_esnap_clone_decouple ...passed 00:07:04.914 Test: blob_esnap_clone_reload ...passed 00:07:04.914 Test: blob_esnap_hotplug ...passed 00:07:04.914 Suite: blob_copy_noextent 00:07:04.914 Test: blob_init ...[2024-11-05 16:45:53.698830] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:04.914 passed 00:07:04.914 Test: blob_thin_provision ...passed 00:07:04.914 Test: blob_read_only ...passed 00:07:04.914 Test: bs_load ...[2024-11-05 16:45:53.741108] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:04.914 passed 00:07:04.914 Test: bs_load_custom_cluster_size ...passed 00:07:04.914 Test: bs_load_after_failed_grow ...passed 00:07:04.914 Test: bs_cluster_sz ...[2024-11-05 16:45:53.764168] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:04.914 [2024-11-05 16:45:53.764372] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
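
Editor's note on the bs_cluster_sz failures above: they exercise the option checks that spdk_bs_init applies before allocating a blobstore — options may not be zero, and the cluster size must be at least one metadata page (4096 bytes in these runs). The sketch below is only an illustrative restatement of the two checks implied by the logged messages, not SPDK's actual bs_opts_verify()/bs_alloc() code; the struct layout, field names, and PAGE_SIZE constant are assumptions.

    /* Illustrative only: mirrors the two validations suggested by the
     * bs_opts_verify()/bs_alloc() errors in this log. Names hypothetical. */
    #include <errno.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096u   /* assumed metadata page size, per the message */

    struct bs_opts {
        uint32_t cluster_sz;    /* bytes per cluster */
        uint32_t num_md_pages;  /* pages reserved for metadata */
    };

    static int
    bs_opts_verify(const struct bs_opts *opts)
    {
        /* "Blobstore options cannot be set to 0" */
        if (opts->cluster_sz == 0 || opts->num_md_pages == 0) {
            return -EINVAL;
        }
        /* "Cluster size 4095 is smaller than page size 4096" */
        if (opts->cluster_sz < PAGE_SIZE) {
            return -EINVAL;
        }
        return 0;
    }

The test deliberately passes cluster_sz values of 0 and 4095 to hit both branches, which is why the two errors always appear back to back in each suite.
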
00:07:04.914 [2024-11-05 16:45:53.764417] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:04.914 passed 00:07:04.914 Test: bs_resize_md ...passed 00:07:04.914 Test: bs_destroy ...passed 00:07:05.172 Test: bs_type ...passed 00:07:05.172 Test: bs_super_block ...passed 00:07:05.172 Test: bs_test_recover_cluster_count ...passed 00:07:05.172 Test: bs_grow_live ...passed 00:07:05.172 Test: bs_grow_live_no_space ...passed 00:07:05.172 Test: bs_test_grow ...passed 00:07:05.172 Test: blob_serialize_test ...passed 00:07:05.172 Test: super_block_crc ...passed 00:07:05.172 Test: blob_thin_prov_write_count_io ...passed 00:07:05.172 Test: bs_load_iter_test ...passed 00:07:05.172 Test: blob_relations ...[2024-11-05 16:45:53.903857] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:05.172 [2024-11-05 16:45:53.903971] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.172 [2024-11-05 16:45:53.904515] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:05.172 [2024-11-05 16:45:53.904561] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.172 passed 00:07:05.172 Test: blob_relations2 ...[2024-11-05 16:45:53.917009] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:05.172 [2024-11-05 16:45:53.917083] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.172 [2024-11-05 16:45:53.917124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:05.172 [2024-11-05 16:45:53.917139] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.173 [2024-11-05 16:45:53.918002] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:05.173 [2024-11-05 16:45:53.918116] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.173 [2024-11-05 16:45:53.918406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:05.173 [2024-11-05 16:45:53.918455] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.173 passed 00:07:05.173 Test: blob_relations3 ...passed 00:07:05.173 Test: blobstore_clean_power_failure ...passed 00:07:05.433 Test: blob_delete_snapshot_power_failure ...[2024-11-05 16:45:54.061160] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:05.433 [2024-11-05 16:45:54.072505] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:05.433 [2024-11-05 16:45:54.072589] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:05.433 [2024-11-05 16:45:54.072633] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.433 [2024-11-05 16:45:54.083810] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:05.433 [2024-11-05 16:45:54.083888] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:05.433 [2024-11-05 16:45:54.083935] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:05.433 [2024-11-05 16:45:54.083958] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.433 [2024-11-05 16:45:54.095158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:05.433 [2024-11-05 16:45:54.095303] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.433 [2024-11-05 16:45:54.106541] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:05.433 [2024-11-05 16:45:54.106661] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.433 [2024-11-05 16:45:54.118496] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:05.433 [2024-11-05 16:45:54.118592] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.433 passed 00:07:05.433 Test: blob_create_snapshot_power_failure ...[2024-11-05 16:45:54.155553] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:05.433 [2024-11-05 16:45:54.177946] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:05.433 [2024-11-05 16:45:54.189289] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:05.433 passed 00:07:05.433 Test: blob_io_unit ...passed 00:07:05.433 Test: blob_io_unit_compatibility ...passed 00:07:05.433 Test: blob_ext_md_pages ...passed 00:07:05.433 Test: blob_esnap_io_4096_4096 ...passed 00:07:05.433 Test: blob_esnap_io_512_512 ...passed 00:07:05.705 Test: blob_esnap_io_4096_512 ...passed 00:07:05.705 Test: blob_esnap_io_512_4096 ...passed 00:07:05.705 Suite: blob_bs_copy_noextent 00:07:05.705 Test: blob_open ...passed 00:07:05.705 Test: blob_create ...[2024-11-05 16:45:54.422756] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:05.705 passed 00:07:05.705 Test: blob_create_loop ...passed 00:07:05.705 Test: blob_create_fail ...[2024-11-05 16:45:54.509416] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:05.705 passed 00:07:05.705 Test: blob_create_internal ...passed 00:07:05.705 Test: blob_create_zero_extent ...passed 00:07:05.973 Test: blob_snapshot ...passed 00:07:05.973 Test: blob_clone ...passed 00:07:05.973 Test: blob_inflate ...[2024-11-05 16:45:54.670942] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:05.973 passed 00:07:05.973 Test: blob_delete ...passed 00:07:05.973 Test: blob_resize_test ...[2024-11-05 16:45:54.730887] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:05.973 passed 00:07:05.973 Test: channel_ops ...passed 00:07:05.973 Test: blob_super ...passed 00:07:05.973 Test: blob_rw_verify_iov ...passed 00:07:06.232 Test: blob_unmap ...passed 00:07:06.232 Test: blob_iter ...passed 00:07:06.232 Test: blob_parse_md ...passed 00:07:06.232 Test: bs_load_pending_removal ...passed 00:07:06.232 Test: bs_unload ...[2024-11-05 16:45:54.975016] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:06.232 passed 00:07:06.232 Test: bs_usable_clusters ...passed 00:07:06.232 Test: blob_crc ...[2024-11-05 16:45:55.037977] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:06.232 [2024-11-05 16:45:55.038113] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:06.232 passed 00:07:06.232 Test: blob_flags ...passed 00:07:06.232 Test: bs_version ...passed 00:07:06.490 Test: blob_set_xattrs_test ...[2024-11-05 16:45:55.137075] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:06.490 [2024-11-05 16:45:55.137200] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:06.490 passed 00:07:06.490 Test: blob_thin_prov_alloc ...passed 00:07:06.490 Test: blob_insert_cluster_msg_test ...passed 00:07:06.490 Test: blob_thin_prov_rw ...passed 00:07:06.749 Test: blob_thin_prov_rle ...passed 00:07:06.749 Test: blob_thin_prov_rw_iov ...passed 00:07:06.749 Test: blob_snapshot_rw ...passed 00:07:06.749 Test: blob_snapshot_rw_iov ...passed 00:07:07.007 Test: blob_inflate_rw ...passed 00:07:07.007 Test: blob_snapshot_freeze_io ...passed 00:07:07.007 Test: blob_operation_split_rw ...passed 00:07:07.266 Test: blob_operation_split_rw_iov ...passed 00:07:07.266 Test: blob_simultaneous_operations ...[2024-11-05 16:45:55.974685] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:07.266 [2024-11-05 16:45:55.974779] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:07.266 [2024-11-05 16:45:55.975289] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:07.266 [2024-11-05 16:45:55.975321] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:07.266 [2024-11-05 16:45:55.977750] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:07.266 [2024-11-05 16:45:55.977796] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:07.266 [2024-11-05 16:45:55.977892] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:07:07.266 [2024-11-05 16:45:55.977914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:07.266 passed 00:07:07.266 Test: blob_persist_test ...passed 00:07:07.266 Test: blob_decouple_snapshot ...passed 00:07:07.266 Test: blob_seek_io_unit ...passed 00:07:07.266 Test: blob_nested_freezes ...passed 00:07:07.266 Suite: blob_blob_copy_noextent 00:07:07.266 Test: blob_write ...passed 00:07:07.526 Test: blob_read ...passed 00:07:07.526 Test: blob_rw_verify ...passed 00:07:07.526 Test: blob_rw_verify_iov_nomem ...passed 00:07:07.526 Test: blob_rw_iov_read_only ...passed 00:07:07.526 Test: blob_xattr ...passed 00:07:07.526 Test: blob_dirty_shutdown ...passed 00:07:07.526 Test: blob_is_degraded ...passed 00:07:07.526 Suite: blob_esnap_bs_copy_noextent 00:07:07.526 Test: blob_esnap_create ...passed 00:07:07.785 Test: blob_esnap_thread_add_remove ...passed 00:07:07.785 Test: blob_esnap_clone_snapshot ...passed 00:07:07.785 Test: blob_esnap_clone_inflate ...passed 00:07:07.785 Test: blob_esnap_clone_decouple ...passed 00:07:07.785 Test: blob_esnap_clone_reload ...passed 00:07:07.785 Test: blob_esnap_hotplug ...passed 00:07:07.785 Suite: blob_copy_extent 00:07:07.786 Test: blob_init ...[2024-11-05 16:45:56.603262] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:07.786 passed 00:07:07.786 Test: blob_thin_provision ...passed 00:07:07.786 Test: blob_read_only ...passed 00:07:07.786 Test: bs_load ...[2024-11-05 16:45:56.650891] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:07.786 passed 00:07:07.786 Test: bs_load_custom_cluster_size ...passed 00:07:08.045 Test: bs_load_after_failed_grow ...passed 00:07:08.045 Test: bs_cluster_sz ...[2024-11-05 16:45:56.675260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:08.045 [2024-11-05 16:45:56.675476] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
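
Editor's note on the bs_load/blob_relations output above and below: every blobid in this log is 0x100000000 plus a metadata page index (page 0 maps to 0x100000000, page 2 to 0x100000002), and blob_parse rejects a blob whose stored ID disagrees with its page. A blob ID therefore appears to encode its first metadata page in the low 32 bits. The snippet below merely restates that correlation as observed in the log; the mask and helper names are assumptions, not taken from SPDK headers.

    /* Hypothetical helpers restating the blobid <-> md-page correlation
     * visible in the messages (e.g. page 2 <-> blobid 0x100000002). */
    #include <assert.h>
    #include <stdint.h>

    #define BLOBID_HIGH_BIT (1ULL << 32)  /* assumed from the logged IDs */

    static uint64_t page_to_blobid(uint32_t md_page) { return BLOBID_HIGH_BIT | md_page; }
    static uint32_t blobid_to_page(uint64_t blobid)  { return (uint32_t)(blobid & 0xFFFFFFFFu); }

    int main(void)
    {
        assert(page_to_blobid(2) == 0x100000002ULL);
        assert(blobid_to_page(0x100000000ULL) == 0);
        return 0;
    }
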
00:07:08.045 [2024-11-05 16:45:56.675518] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:08.045 passed 00:07:08.045 Test: bs_resize_md ...passed 00:07:08.045 Test: bs_destroy ...passed 00:07:08.045 Test: bs_type ...passed 00:07:08.045 Test: bs_super_block ...passed 00:07:08.045 Test: bs_test_recover_cluster_count ...passed 00:07:08.045 Test: bs_grow_live ...passed 00:07:08.045 Test: bs_grow_live_no_space ...passed 00:07:08.045 Test: bs_test_grow ...passed 00:07:08.045 Test: blob_serialize_test ...passed 00:07:08.045 Test: super_block_crc ...passed 00:07:08.045 Test: blob_thin_prov_write_count_io ...passed 00:07:08.045 Test: bs_load_iter_test ...passed 00:07:08.045 Test: blob_relations ...[2024-11-05 16:45:56.835578] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:08.045 [2024-11-05 16:45:56.835732] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.045 [2024-11-05 16:45:56.836779] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:08.045 [2024-11-05 16:45:56.836885] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.045 passed 00:07:08.045 Test: blob_relations2 ...[2024-11-05 16:45:56.853310] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:08.045 [2024-11-05 16:45:56.853419] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.045 [2024-11-05 16:45:56.853538] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:08.045 [2024-11-05 16:45:56.853565] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.045 [2024-11-05 16:45:56.854988] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:08.045 [2024-11-05 16:45:56.855074] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.045 [2024-11-05 16:45:56.855560] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:08.045 [2024-11-05 16:45:56.855624] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.045 passed 00:07:08.045 Test: blob_relations3 ...passed 00:07:08.304 Test: blobstore_clean_power_failure ...passed 00:07:08.304 Test: blob_delete_snapshot_power_failure ...[2024-11-05 16:45:57.022927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:08.304 [2024-11-05 16:45:57.035938] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:08.304 [2024-11-05 16:45:57.048343] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:08.304 [2024-11-05 16:45:57.048471] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:08.304 [2024-11-05 16:45:57.048519] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.304 [2024-11-05 16:45:57.063927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:08.304 [2024-11-05 16:45:57.064022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:08.304 [2024-11-05 16:45:57.064060] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:08.304 [2024-11-05 16:45:57.064083] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.304 [2024-11-05 16:45:57.077557] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:08.304 [2024-11-05 16:45:57.077662] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:08.304 [2024-11-05 16:45:57.077716] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:08.304 [2024-11-05 16:45:57.077740] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.304 [2024-11-05 16:45:57.091814] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:08.304 [2024-11-05 16:45:57.091934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.304 [2024-11-05 16:45:57.105608] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:08.304 [2024-11-05 16:45:57.105764] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.304 [2024-11-05 16:45:57.120405] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:08.304 [2024-11-05 16:45:57.120573] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.304 passed 00:07:08.304 Test: blob_create_snapshot_power_failure ...[2024-11-05 16:45:57.160064] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:08.304 [2024-11-05 16:45:57.171778] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:08.563 [2024-11-05 16:45:57.197839] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:08.563 [2024-11-05 16:45:57.211909] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:08.563 passed 00:07:08.563 Test: blob_io_unit ...passed 00:07:08.563 Test: blob_io_unit_compatibility ...passed 00:07:08.563 Test: blob_ext_md_pages ...passed 00:07:08.563 Test: blob_esnap_io_4096_4096 ...passed 00:07:08.563 Test: blob_esnap_io_512_512 ...passed 00:07:08.563 Test: blob_esnap_io_4096_512 ...passed 00:07:08.563 Test: 
blob_esnap_io_512_4096 ...passed 00:07:08.563 Suite: blob_bs_copy_extent 00:07:08.821 Test: blob_open ...passed 00:07:08.821 Test: blob_create ...[2024-11-05 16:45:57.478474] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:08.821 passed 00:07:08.821 Test: blob_create_loop ...passed 00:07:08.821 Test: blob_create_fail ...[2024-11-05 16:45:57.585828] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:08.821 passed 00:07:08.821 Test: blob_create_internal ...passed 00:07:08.821 Test: blob_create_zero_extent ...passed 00:07:09.081 Test: blob_snapshot ...passed 00:07:09.081 Test: blob_clone ...passed 00:07:09.081 Test: blob_inflate ...[2024-11-05 16:45:57.771899] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:09.081 passed 00:07:09.081 Test: blob_delete ...passed 00:07:09.081 Test: blob_resize_test ...[2024-11-05 16:45:57.844868] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:09.081 passed 00:07:09.081 Test: channel_ops ...passed 00:07:09.081 Test: blob_super ...passed 00:07:09.340 Test: blob_rw_verify_iov ...passed 00:07:09.340 Test: blob_unmap ...passed 00:07:09.340 Test: blob_iter ...passed 00:07:09.340 Test: blob_parse_md ...passed 00:07:09.340 Test: bs_load_pending_removal ...passed 00:07:09.340 Test: bs_unload ...[2024-11-05 16:45:58.145106] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:09.340 passed 00:07:09.340 Test: bs_usable_clusters ...passed 00:07:09.340 Test: blob_crc ...[2024-11-05 16:45:58.223052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:09.340 [2024-11-05 16:45:58.223237] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:09.598 passed 00:07:09.598 Test: blob_flags ...passed 00:07:09.598 Test: bs_version ...passed 00:07:09.598 Test: blob_set_xattrs_test ...[2024-11-05 16:45:58.338066] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:09.598 [2024-11-05 16:45:58.338235] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:09.598 passed 00:07:09.598 Test: blob_thin_prov_alloc ...passed 00:07:09.857 Test: blob_insert_cluster_msg_test ...passed 00:07:09.857 Test: blob_thin_prov_rw ...passed 00:07:09.857 Test: blob_thin_prov_rle ...passed 00:07:09.857 Test: blob_thin_prov_rw_iov ...passed 00:07:09.857 Test: blob_snapshot_rw ...passed 00:07:09.857 Test: blob_snapshot_rw_iov ...passed 00:07:10.115 Test: blob_inflate_rw ...passed 00:07:10.115 Test: blob_snapshot_freeze_io ...passed 00:07:10.374 Test: blob_operation_split_rw ...passed 00:07:10.633 Test: blob_operation_split_rw_iov ...passed 00:07:10.633 Test: blob_simultaneous_operations ...[2024-11-05 16:45:59.317171] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:10.633 [2024-11-05 
16:45:59.317302] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:10.633 [2024-11-05 16:45:59.317808] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:10.633 [2024-11-05 16:45:59.317849] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:10.633 [2024-11-05 16:45:59.320863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:10.633 [2024-11-05 16:45:59.320916] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:10.633 [2024-11-05 16:45:59.321021] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:10.633 [2024-11-05 16:45:59.321050] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:10.633 passed 00:07:10.633 Test: blob_persist_test ...passed 00:07:10.633 Test: blob_decouple_snapshot ...passed 00:07:10.633 Test: blob_seek_io_unit ...passed 00:07:10.633 Test: blob_nested_freezes ...passed 00:07:10.633 Suite: blob_blob_copy_extent 00:07:10.891 Test: blob_write ...passed 00:07:10.891 Test: blob_read ...passed 00:07:10.891 Test: blob_rw_verify ...passed 00:07:10.891 Test: blob_rw_verify_iov_nomem ...passed 00:07:10.891 Test: blob_rw_iov_read_only ...passed 00:07:10.891 Test: blob_xattr ...passed 00:07:11.153 Test: blob_dirty_shutdown ...passed 00:07:11.153 Test: blob_is_degraded ...passed 00:07:11.153 Suite: blob_esnap_bs_copy_extent 00:07:11.153 Test: blob_esnap_create ...passed 00:07:11.153 Test: blob_esnap_thread_add_remove ...passed 00:07:11.153 Test: blob_esnap_clone_snapshot ...passed 00:07:11.153 Test: blob_esnap_clone_inflate ...passed 00:07:11.153 Test: blob_esnap_clone_decouple ...passed 00:07:11.414 Test: blob_esnap_clone_reload ...passed 00:07:11.414 Test: blob_esnap_hotplug ...passed 00:07:11.414 00:07:11.414 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.414 suites 16 16 n/a 0 0 00:07:11.414 tests 348 348 348 0 0 00:07:11.414 asserts 92605 92605 92605 0 n/a 00:07:11.414 00:07:11.414 Elapsed time = 13.165 seconds 00:07:11.414 16:46:00 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:07:11.414 00:07:11.414 00:07:11.414 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.414 http://cunit.sourceforge.net/ 00:07:11.414 00:07:11.414 00:07:11.414 Suite: blob_bdev 00:07:11.414 Test: create_bs_dev ...passed 00:07:11.414 Test: create_bs_dev_ro ...[2024-11-05 16:46:00.203751] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:07:11.414 passed 00:07:11.414 Test: create_bs_dev_rw ...passed 00:07:11.414 Test: claim_bs_dev ...[2024-11-05 16:46:00.204243] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:07:11.414 passed 00:07:11.414 Test: claim_bs_dev_ro ...passed 00:07:11.414 Test: deferred_destroy_refs ...passed 00:07:11.414 Test: deferred_destroy_channels ...passed 00:07:11.414 Test: deferred_destroy_threads ...passed 00:07:11.414 00:07:11.414 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.414 suites 1 1 n/a 0 0 00:07:11.414 tests 8 8 8 0 0 00:07:11.414 
asserts 119 119 119 0 n/a 00:07:11.414 00:07:11.414 Elapsed time = 0.001 seconds 00:07:11.414 16:46:00 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:07:11.414 00:07:11.414 00:07:11.414 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.414 http://cunit.sourceforge.net/ 00:07:11.414 00:07:11.414 00:07:11.414 Suite: tree 00:07:11.414 Test: blobfs_tree_op_test ...passed 00:07:11.414 00:07:11.414 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.414 suites 1 1 n/a 0 0 00:07:11.414 tests 1 1 1 0 0 00:07:11.414 asserts 27 27 27 0 n/a 00:07:11.415 00:07:11.415 Elapsed time = 0.000 seconds 00:07:11.415 16:46:00 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:07:11.415 00:07:11.415 00:07:11.415 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.415 http://cunit.sourceforge.net/ 00:07:11.415 00:07:11.415 00:07:11.415 Suite: blobfs_async_ut 00:07:11.673 Test: fs_init ...passed 00:07:11.673 Test: fs_open ...passed 00:07:11.673 Test: fs_create ...passed 00:07:11.673 Test: fs_truncate ...passed 00:07:11.673 Test: fs_rename ...[2024-11-05 16:46:00.407900] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1476:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:07:11.673 passed 00:07:11.673 Test: fs_rw_async ...passed 00:07:11.673 Test: fs_writev_readv_async ...passed 00:07:11.673 Test: tree_find_buffer_ut ...passed 00:07:11.673 Test: channel_ops ...passed 00:07:11.673 Test: channel_ops_sync ...passed 00:07:11.673 00:07:11.673 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.673 suites 1 1 n/a 0 0 00:07:11.673 tests 10 10 10 0 0 00:07:11.674 asserts 292 292 292 0 n/a 00:07:11.674 00:07:11.674 Elapsed time = 0.193 seconds 00:07:11.674 16:46:00 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:07:11.674 00:07:11.674 00:07:11.674 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.674 http://cunit.sourceforge.net/ 00:07:11.674 00:07:11.674 00:07:11.674 Suite: blobfs_sync_ut 00:07:11.932 Test: cache_read_after_write ...[2024-11-05 16:46:00.609434] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1476:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:07:11.932 passed 00:07:11.932 Test: file_length ...passed 00:07:11.932 Test: append_write_to_extend_blob ...passed 00:07:11.932 Test: partial_buffer ...passed 00:07:11.932 Test: cache_write_null_buffer ...passed 00:07:11.932 Test: fs_create_sync ...passed 00:07:11.932 Test: fs_rename_sync ...passed 00:07:11.932 Test: cache_append_no_cache ...passed 00:07:11.932 Test: fs_delete_file_without_close ...passed 00:07:11.932 00:07:11.932 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.932 suites 1 1 n/a 0 0 00:07:11.932 tests 9 9 9 0 0 00:07:11.932 asserts 345 345 345 0 n/a 00:07:11.932 00:07:11.932 Elapsed time = 0.388 seconds 00:07:11.932 16:46:00 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:07:11.932 00:07:11.932 00:07:11.932 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.932 http://cunit.sourceforge.net/ 00:07:11.932 00:07:11.932 00:07:11.932 Suite: blobfs_bdev_ut 00:07:11.932 Test: spdk_blobfs_bdev_detect_test ...[2024-11-05 16:46:00.797087] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
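
Editor's note: each "Run Summary" table in this log is the standard basic-mode report from CUnit 2.1-3, the framework named in every suite banner. For reference, a minimal harness that produces the same kind of output looks roughly like this; the suite and test names here are placeholders, not the SPDK ones. It links with -lcunit.

    /* Minimal CUnit 2.1-3 harness producing a "Run Summary" like the ones
     * in this log. Suite/test names are placeholders. */
    #include <CUnit/Basic.h>

    static void test_example(void)
    {
        CU_ASSERT_EQUAL(1 + 1, 2);
    }

    int main(void)
    {
        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }
        CU_pSuite suite = CU_add_suite("example_suite", NULL, NULL);
        if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();   /* prints the per-suite Run Summary table */
        CU_cleanup_registry();
        return CU_get_error();
    }

The "suites / tests / asserts" rows in the log are the aggregate counters CU_basic_run_tests() prints after running every registered suite, which is why a large run such as the blob tests reports 16 suites, 348 tests, and 92605 asserts in one table.
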
00:07:11.932 passed 00:07:11.933 Test: spdk_blobfs_bdev_create_test ...passed 00:07:11.933 Test: spdk_blobfs_bdev_mount_test ...passed[2024-11-05 16:46:00.797572] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:07:11.933 00:07:11.933 00:07:11.933 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.933 suites 1 1 n/a 0 0 00:07:11.933 tests 3 3 3 0 0 00:07:11.933 asserts 9 9 9 0 n/a 00:07:11.933 00:07:11.933 Elapsed time = 0.001 seconds 00:07:11.933 00:07:11.933 real 0m13.945s 00:07:11.933 user 0m13.289s 00:07:11.933 sys 0m0.831s 00:07:11.933 16:46:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.933 16:46:00 -- common/autotest_common.sh@10 -- # set +x 00:07:11.933 ************************************ 00:07:11.933 END TEST unittest_blob_blobfs 00:07:11.933 ************************************ 00:07:12.192 16:46:00 -- unit/unittest.sh@208 -- # run_test unittest_event unittest_event 00:07:12.192 16:46:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:12.192 16:46:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.192 16:46:00 -- common/autotest_common.sh@10 -- # set +x 00:07:12.192 ************************************ 00:07:12.192 START TEST unittest_event 00:07:12.192 ************************************ 00:07:12.192 16:46:00 -- common/autotest_common.sh@1114 -- # unittest_event 00:07:12.192 16:46:00 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:07:12.192 00:07:12.192 00:07:12.192 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.192 http://cunit.sourceforge.net/ 00:07:12.192 00:07:12.192 00:07:12.192 Suite: app_suite 00:07:12.192 Test: test_spdk_app_parse_args ...app_ut [options] 00:07:12.192 options: 00:07:12.192 -c, --config JSON config file (default none) 00:07:12.192 --json JSON config file (default none) 00:07:12.192 --json-ignore-init-errors 00:07:12.192 don't exit on invalid config entry 00:07:12.192 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:12.192 -g, --single-file-segments 00:07:12.192 force creating just one hugetlbfs file 00:07:12.192 -h, --help show this usage 00:07:12.192 -i, --shm-id shared memory ID (optional) 00:07:12.192 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:12.192 --lcores lcore to CPU mapping list. The list is in the format: 00:07:12.192 [<,lcores[@CPUs]>...] 00:07:12.192 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:12.192 Within the group, '-' is used for range separator, 00:07:12.192 ',' is used for single number separator. 00:07:12.192 '( )' can be omitted for single element group, 00:07:12.192 '@' can be omitted if cpus and lcores have the same value 00:07:12.192 -n, --mem-channels channel number of memory channels used for DPDK 00:07:12.192 -p, --main-core main (primary) core for DPDK 00:07:12.192 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:12.192 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:12.192 --disable-cpumask-locks Disable CPU core lock files. 
00:07:12.192 --silence-noticelog disable notice level logging to stderr 00:07:12.192 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:12.192 -u, --no-pci disable PCI access 00:07:12.193 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:12.193 --max-delay maximum reactor delay (in microseconds) 00:07:12.193 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:12.193 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:12.193 -R, --huge-unlink unlink huge files after initialization 00:07:12.193 -v, --version print SPDK version 00:07:12.193 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:12.193 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:12.193 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:12.193 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:12.193 Tracepoints vary in size and can use more than one trace entry. 00:07:12.193 --rpcs-allowed comma-separated list of permitted RPCS 00:07:12.193 --env-context Opaque context for use of the env implementation 00:07:12.193 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:12.193 --no-huge run without using hugepages 00:07:12.193 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:12.193 -e, --tpoint-group [:] 00:07:12.193 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:12.193 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:12.193 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:12.193 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:12.193 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:12.193 app_ut: invalid option -- 'z' 00:07:12.193 app_ut [options] 00:07:12.193 options: 00:07:12.193 -c, --config JSON config file (default none) 00:07:12.193 --json JSON config file (default none) 00:07:12.193 --json-ignore-init-errors 00:07:12.193 don't exit on invalid config entry 00:07:12.193 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:12.193 -g, --single-file-segments 00:07:12.193 force creating just one hugetlbfs file 00:07:12.193 -h, --help show this usage 00:07:12.193 -i, --shm-id shared memory ID (optional) 00:07:12.193 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:12.193 --lcores lcore to CPU mapping list. The list is in the format: 00:07:12.193 [<,lcores[@CPUs]>...] 00:07:12.193 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"'app_ut: unrecognized option '--test-long-opt' 00:07:12.193 00:07:12.193 Within the group, '-' is used for range separator, 00:07:12.193 ',' is used for single number separator. 
00:07:12.193 '( )' can be omitted for single element group, 00:07:12.193 '@' can be omitted if cpus and lcores have the same value 00:07:12.193 -n, --mem-channels channel number of memory channels used for DPDK 00:07:12.193 -p, --main-core main (primary) core for DPDK 00:07:12.193 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:12.193 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:12.193 --disable-cpumask-locks Disable CPU core lock files. 00:07:12.193 --silence-noticelog disable notice level logging to stderr 00:07:12.193 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:12.193 -u, --no-pci disable PCI access 00:07:12.193 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:12.193 --max-delay maximum reactor delay (in microseconds) 00:07:12.193 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:12.193 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:12.193 -R, --huge-unlink unlink huge files after initialization 00:07:12.193 -v, --version print SPDK version 00:07:12.193 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:12.193 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:12.193 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:12.193 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:12.193 Tracepoints vary in size and can use more than one trace entry. 00:07:12.193 --rpcs-allowed comma-separated list of permitted RPCS 00:07:12.193 --env-context Opaque context for use of the env implementation 00:07:12.193 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:12.193 --no-huge run without using hugepages 00:07:12.193 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:12.193 -e, --tpoint-group [:] 00:07:12.193 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:12.193 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:12.193 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:12.193 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:12.193 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:12.193 [2024-11-05 16:46:00.876447] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:07:12.193 app_ut [options] 00:07:12.193 options: 00:07:12.193 -c, --config JSON config file (default none) 00:07:12.193 --json JSON config file (default none) 00:07:12.193 --json-ignore-init-errors 00:07:12.193 don't exit on invalid config entry 00:07:12.193 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:12.193 -g, --single-file-segments 00:07:12.193 force creating just one hugetlbfs file 00:07:12.193 -h, --help show this usage 00:07:12.193 -i, --shm-id shared memory ID (optional) 00:07:12.193 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:12.193 --lcores lcore to CPU mapping list. The list is in the format: 00:07:12.193 [<,lcores[@CPUs]>...] 
00:07:12.193 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:12.193 Within the group, '-' is used for range separator, 00:07:12.193 ',' is used for single number separator. 00:07:12.193 '( )' can be omitted for single element group, 00:07:12.193 '@' can be omitted if cpus and lcores have the same value 00:07:12.193 -n, --mem-channels channel number of memory channels used for DPDK 00:07:12.193 -p, --main-core main (primary) core for DPDK 00:07:12.193 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:12.193 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:12.193 --disable-cpumask-locks Disable CPU core lock files. 00:07:12.193 --silence-noticelog disable notice level logging to stderr 00:07:12.193 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:12.193 -u, --no-pci disable PCI access 00:07:12.193 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:12.193 --max-delay maximum reactor delay (in microseconds) 00:07:12.193 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:12.193 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:12.193 -R, --huge-unlink unlink huge files after initialization 00:07:12.193 -v, --version print SPDK version 00:07:12.193 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:12.193 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:12.193 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:12.193 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:12.193 Tracepoints vary in size and can use more than one trace entry. 00:07:12.193 --rpcs-allowed comma-separated list of permitted RPCS 00:07:12.193 --env-context Opaque context for use of the env implementation 00:07:12.193 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:12.193 --no-huge run without using hugepages 00:07:12.193 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:12.193 -e, --tpoint-group [:] 00:07:12.193 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:12.193 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:12.193 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:07:12.193 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:12.194 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:12.194 passed 00:07:12.194 00:07:12.194 [2024-11-05 16:46:00.876690] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:07:12.194 [2024-11-05 16:46:00.876863] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:07:12.194 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.194 suites 1 1 n/a 0 0 00:07:12.194 tests 1 1 1 0 0 00:07:12.194 asserts 8 8 8 0 n/a 00:07:12.194 00:07:12.194 Elapsed time = 0.001 seconds 00:07:12.194 16:46:00 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:07:12.194 00:07:12.194 00:07:12.194 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.194 http://cunit.sourceforge.net/ 00:07:12.194 00:07:12.194 00:07:12.194 Suite: app_suite 00:07:12.194 Test: test_create_reactor ...passed 00:07:12.194 Test: test_init_reactors ...passed 00:07:12.194 Test: test_event_call ...passed 00:07:12.194 Test: test_schedule_thread ...passed 00:07:12.194 Test: test_reschedule_thread ...passed 00:07:12.194 Test: test_bind_thread ...passed 00:07:12.194 Test: test_for_each_reactor ...passed 00:07:12.194 Test: test_reactor_stats ...passed 00:07:12.194 Test: test_scheduler ...passed 00:07:12.194 Test: test_governor ...passed 00:07:12.194 00:07:12.194 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.194 suites 1 1 n/a 0 0 00:07:12.194 tests 10 10 10 0 0 00:07:12.194 asserts 344 344 344 0 n/a 00:07:12.194 00:07:12.194 Elapsed time = 0.019 seconds 00:07:12.194 00:07:12.194 real 0m0.089s 00:07:12.194 user 0m0.063s 00:07:12.194 sys 0m0.027s 00:07:12.194 16:46:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.194 16:46:00 -- common/autotest_common.sh@10 -- # set +x 00:07:12.194 ************************************ 00:07:12.194 END TEST unittest_event 00:07:12.194 ************************************ 00:07:12.194 16:46:00 -- unit/unittest.sh@209 -- # uname -s 00:07:12.194 16:46:00 -- unit/unittest.sh@209 -- # '[' Linux = Linux ']' 00:07:12.194 16:46:00 -- unit/unittest.sh@210 -- # run_test unittest_ftl unittest_ftl 00:07:12.194 16:46:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:12.194 16:46:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.194 16:46:00 -- common/autotest_common.sh@10 -- # set +x 00:07:12.194 ************************************ 00:07:12.194 START TEST unittest_ftl 00:07:12.194 ************************************ 00:07:12.194 16:46:01 -- common/autotest_common.sh@1114 -- # unittest_ftl 00:07:12.194 16:46:01 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:07:12.194 00:07:12.194 00:07:12.194 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.194 http://cunit.sourceforge.net/ 00:07:12.194 00:07:12.194 00:07:12.194 Suite: ftl_band_suite 00:07:12.194 Test: test_band_block_offset_from_addr_base ...passed 00:07:12.452 Test: test_band_block_offset_from_addr_offset ...passed 00:07:12.452 Test: test_band_addr_from_block_offset ...passed 00:07:12.452 Test: test_band_set_addr ...passed 00:07:12.452 Test: test_invalidate_addr ...passed 00:07:12.452 Test: test_next_xfer_addr ...passed 00:07:12.452 00:07:12.452 Run Summary: 
Type Total Ran Passed Failed Inactive 00:07:12.452 suites 1 1 n/a 0 0 00:07:12.452 tests 6 6 6 0 0 00:07:12.452 asserts 30356 30356 30356 0 n/a 00:07:12.452 00:07:12.452 Elapsed time = 0.181 seconds 00:07:12.452 16:46:01 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:07:12.452 00:07:12.452 00:07:12.452 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.452 http://cunit.sourceforge.net/ 00:07:12.452 00:07:12.452 00:07:12.452 Suite: ftl_bitmap 00:07:12.452 Test: test_ftl_bitmap_create ...[2024-11-05 16:46:01.264992] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:07:12.452 passed 00:07:12.452 Test: test_ftl_bitmap_get ...[2024-11-05 16:46:01.265280] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:07:12.452 passed 00:07:12.452 Test: test_ftl_bitmap_set ...passed 00:07:12.452 Test: test_ftl_bitmap_clear ...passed 00:07:12.452 Test: test_ftl_bitmap_find_first_set ...passed 00:07:12.452 Test: test_ftl_bitmap_find_first_clear ...passed 00:07:12.452 Test: test_ftl_bitmap_count_set ...passed 00:07:12.452 00:07:12.453 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.453 suites 1 1 n/a 0 0 00:07:12.453 tests 7 7 7 0 0 00:07:12.453 asserts 137 137 137 0 n/a 00:07:12.453 00:07:12.453 Elapsed time = 0.001 seconds 00:07:12.453 16:46:01 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:07:12.453 00:07:12.453 00:07:12.453 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.453 http://cunit.sourceforge.net/ 00:07:12.453 00:07:12.453 00:07:12.453 Suite: ftl_io_suite 00:07:12.453 Test: test_completion ...passed 00:07:12.453 Test: test_multiple_ios ...passed 00:07:12.453 00:07:12.453 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.453 suites 1 1 n/a 0 0 00:07:12.453 tests 2 2 2 0 0 00:07:12.453 asserts 47 47 47 0 n/a 00:07:12.453 00:07:12.453 Elapsed time = 0.003 seconds 00:07:12.453 16:46:01 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:07:12.453 00:07:12.453 00:07:12.453 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.453 http://cunit.sourceforge.net/ 00:07:12.453 00:07:12.453 00:07:12.453 Suite: ftl_mngt 00:07:12.453 Test: test_next_step ...passed 00:07:12.453 Test: test_continue_step ...passed 00:07:12.453 Test: test_get_func_and_step_cntx_alloc ...passed 00:07:12.453 Test: test_fail_step ...passed 00:07:12.453 Test: test_mngt_call_and_call_rollback ...passed 00:07:12.453 Test: test_nested_process_failure ...passed 00:07:12.453 00:07:12.453 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.453 suites 1 1 n/a 0 0 00:07:12.453 tests 6 6 6 0 0 00:07:12.453 asserts 176 176 176 0 n/a 00:07:12.453 00:07:12.453 Elapsed time = 0.001 seconds 00:07:12.713 16:46:01 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:07:12.713 00:07:12.713 00:07:12.713 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.713 http://cunit.sourceforge.net/ 00:07:12.713 00:07:12.713 00:07:12.713 Suite: ftl_mempool 00:07:12.713 Test: test_ftl_mempool_create ...passed 00:07:12.713 Test: test_ftl_mempool_get_put ...passed 00:07:12.713 00:07:12.713 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.713 suites 1 1 n/a 0 0 00:07:12.713 tests 2 2 2 0 0 
00:07:12.713 asserts 36 36 36 0 n/a 00:07:12.713 00:07:12.713 Elapsed time = 0.000 seconds 00:07:12.713 16:46:01 -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:07:12.713 00:07:12.713 00:07:12.713 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.713 http://cunit.sourceforge.net/ 00:07:12.713 00:07:12.713 00:07:12.713 Suite: ftl_addr64_suite 00:07:12.713 Test: test_addr_cached ...passed 00:07:12.713 00:07:12.713 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.713 suites 1 1 n/a 0 0 00:07:12.713 tests 1 1 1 0 0 00:07:12.713 asserts 1536 1536 1536 0 n/a 00:07:12.713 00:07:12.713 Elapsed time = 0.000 seconds 00:07:12.713 16:46:01 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:07:12.713 00:07:12.713 00:07:12.713 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.713 http://cunit.sourceforge.net/ 00:07:12.713 00:07:12.713 00:07:12.713 Suite: ftl_sb 00:07:12.713 Test: test_sb_crc_v2 ...passed 00:07:12.713 Test: test_sb_crc_v3 ...passed 00:07:12.713 Test: test_sb_v3_md_layout ...[2024-11-05 16:46:01.414284] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:07:12.713 [2024-11-05 16:46:01.414688] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:12.713 [2024-11-05 16:46:01.414766] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:12.713 [2024-11-05 16:46:01.414830] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:12.713 [2024-11-05 16:46:01.414895] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:12.713 [2024-11-05 16:46:01.415024] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:07:12.713 [2024-11-05 16:46:01.415071] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:12.713 [2024-11-05 16:46:01.415139] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:12.713 [2024-11-05 16:46:01.415288] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:12.713 [2024-11-05 16:46:01.415374] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:12.713 [2024-11-05 16:46:01.415449] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:12.713 passed 00:07:12.713 Test: test_sb_v5_md_layout ...passed 00:07:12.713 00:07:12.713 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.713 suites 1 1 n/a 0 0 00:07:12.713 tests 4 4 4 0 0 00:07:12.713 asserts 148 148 148 0 n/a 00:07:12.713 00:07:12.713 Elapsed time = 0.003 seconds 00:07:12.713 16:46:01 -- unit/unittest.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:07:12.713 00:07:12.713 00:07:12.713 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.713 http://cunit.sourceforge.net/ 00:07:12.713 00:07:12.713 00:07:12.713 Suite: ftl_layout_upgrade 00:07:12.713 Test: test_l2p_upgrade ...passed 00:07:12.713 00:07:12.713 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.713 suites 1 1 n/a 0 0 00:07:12.713 tests 1 1 1 0 0 00:07:12.713 asserts 140 140 140 0 n/a 00:07:12.713 00:07:12.713 Elapsed time = 0.001 seconds 00:07:12.713 00:07:12.713 real 0m0.455s 00:07:12.713 user 0m0.226s 00:07:12.713 sys 0m0.232s 00:07:12.713 16:46:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.713 16:46:01 -- common/autotest_common.sh@10 -- # set +x 00:07:12.713 ************************************ 00:07:12.713 END TEST unittest_ftl 00:07:12.713 ************************************ 00:07:12.713 16:46:01 -- unit/unittest.sh@213 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:12.713 16:46:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:12.713 16:46:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.713 16:46:01 -- common/autotest_common.sh@10 -- # set +x 00:07:12.713 ************************************ 00:07:12.713 START TEST unittest_accel 00:07:12.713 ************************************ 00:07:12.713 16:46:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:12.713 00:07:12.713 00:07:12.713 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.713 http://cunit.sourceforge.net/ 00:07:12.713 00:07:12.713 00:07:12.713 Suite: accel_sequence 00:07:12.713 Test: test_sequence_fill_copy ...passed 00:07:12.713 Test: test_sequence_abort ...passed 00:07:12.713 Test: test_sequence_append_error ...passed 00:07:12.713 Test: test_sequence_completion_error ...[2024-11-05 16:46:01.540168] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fcdb1ac27c0 00:07:12.713 [2024-11-05 16:46:01.540511] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7fcdb1ac27c0 00:07:12.713 [2024-11-05 16:46:01.540574] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7fcdb1ac27c0 00:07:12.713 [2024-11-05 16:46:01.540648] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7fcdb1ac27c0 00:07:12.713 passed 00:07:12.713 Test: test_sequence_decompress ...passed 00:07:12.713 Test: test_sequence_reverse ...passed 00:07:12.713 Test: test_sequence_copy_elision ...passed 00:07:12.713 Test: test_sequence_accel_buffers ...passed 00:07:12.713 Test: test_sequence_memory_domain ...[2024-11-05 16:46:01.552439] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:07:12.713 passed 00:07:12.713 Test: test_sequence_module_memory_domain ...[2024-11-05 16:46:01.552633] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:07:12.713 passed 00:07:12.713 Test: test_sequence_crypto ...passed 00:07:12.713 Test: test_sequence_driver ...[2024-11-05 16:46:01.559621] 
/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7fcdb0e9a7c0 using driver: ut 00:07:12.713 [2024-11-05 16:46:01.559749] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fcdb0e9a7c0 through driver: ut 00:07:12.713 passed 00:07:12.713 Test: test_sequence_same_iovs ...passed 00:07:12.713 Test: test_sequence_crc32 ...passed 00:07:12.713 Suite: accel 00:07:12.713 Test: test_spdk_accel_task_complete ...passed 00:07:12.713 Test: test_get_task ...passed 00:07:12.713 Test: test_spdk_accel_submit_copy ...passed 00:07:12.713 Test: test_spdk_accel_submit_dualcast ...[2024-11-05 16:46:01.564952] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:12.713 [2024-11-05 16:46:01.565027] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:12.713 passed 00:07:12.713 Test: test_spdk_accel_submit_compare ...passed 00:07:12.713 Test: test_spdk_accel_submit_fill ...passed 00:07:12.713 Test: test_spdk_accel_submit_crc32c ...passed 00:07:12.713 Test: test_spdk_accel_submit_crc32cv ...passed 00:07:12.713 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:07:12.713 Test: test_spdk_accel_submit_xor ...passed 00:07:12.713 Test: test_spdk_accel_module_find_by_name ...passed 00:07:12.713 Test: test_spdk_accel_module_register ...passed 00:07:12.713 00:07:12.713 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.713 suites 2 2 n/a 0 0 00:07:12.713 tests 26 26 26 0 0 00:07:12.713 asserts 831 831 831 0 n/a 00:07:12.713 00:07:12.713 Elapsed time = 0.036 seconds 00:07:12.713 00:07:12.713 real 0m0.077s 00:07:12.713 user 0m0.053s 00:07:12.713 sys 0m0.025s 00:07:12.713 16:46:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.713 16:46:01 -- common/autotest_common.sh@10 -- # set +x 00:07:12.713 ************************************ 00:07:12.713 END TEST unittest_accel 00:07:12.713 ************************************ 00:07:12.974 16:46:01 -- unit/unittest.sh@214 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:12.974 16:46:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:12.974 16:46:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.974 16:46:01 -- common/autotest_common.sh@10 -- # set +x 00:07:12.974 ************************************ 00:07:12.974 START TEST unittest_ioat 00:07:12.974 ************************************ 00:07:12.974 16:46:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:12.974 00:07:12.974 00:07:12.974 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.974 http://cunit.sourceforge.net/ 00:07:12.974 00:07:12.974 00:07:12.974 Suite: ioat 00:07:12.974 Test: ioat_state_check ...passed 00:07:12.974 00:07:12.974 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.974 suites 1 1 n/a 0 0 00:07:12.974 tests 1 1 1 0 0 00:07:12.974 asserts 32 32 32 0 n/a 00:07:12.974 00:07:12.974 Elapsed time = 0.000 seconds 00:07:12.974 00:07:12.974 real 0m0.029s 00:07:12.974 user 0m0.021s 00:07:12.974 sys 0m0.009s 00:07:12.974 16:46:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.974 16:46:01 -- common/autotest_common.sh@10 -- # set +x 00:07:12.974 ************************************ 00:07:12.974 END TEST 
unittest_ioat 00:07:12.974 ************************************ 00:07:12.974 16:46:01 -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:12.974 16:46:01 -- unit/unittest.sh@216 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:12.974 16:46:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:12.974 16:46:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.974 16:46:01 -- common/autotest_common.sh@10 -- # set +x 00:07:12.974 ************************************ 00:07:12.974 START TEST unittest_idxd_user 00:07:12.974 ************************************ 00:07:12.974 16:46:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:12.974 00:07:12.974 00:07:12.974 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.974 http://cunit.sourceforge.net/ 00:07:12.974 00:07:12.974 00:07:12.974 Suite: idxd_user 00:07:12.974 Test: test_idxd_wait_cmd ...[2024-11-05 16:46:01.736386] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:12.974 [2024-11-05 16:46:01.736648] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:07:12.974 passed 00:07:12.974 Test: test_idxd_reset_dev ...[2024-11-05 16:46:01.736785] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:12.974 passed 00:07:12.974 Test: test_idxd_group_config ...passed 00:07:12.974 Test: test_idxd_wq_config ...[2024-11-05 16:46:01.736832] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:07:12.974 passed 00:07:12.974 00:07:12.974 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.974 suites 1 1 n/a 0 0 00:07:12.974 tests 4 4 4 0 0 00:07:12.974 asserts 20 20 20 0 n/a 00:07:12.974 00:07:12.974 Elapsed time = 0.001 seconds 00:07:12.974 00:07:12.974 real 0m0.033s 00:07:12.974 user 0m0.013s 00:07:12.974 sys 0m0.021s 00:07:12.974 16:46:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.974 16:46:01 -- common/autotest_common.sh@10 -- # set +x 00:07:12.974 ************************************ 00:07:12.974 END TEST unittest_idxd_user 00:07:12.974 ************************************ 00:07:12.974 16:46:01 -- unit/unittest.sh@218 -- # run_test unittest_iscsi unittest_iscsi 00:07:12.974 16:46:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:12.974 16:46:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.974 16:46:01 -- common/autotest_common.sh@10 -- # set +x 00:07:12.974 ************************************ 00:07:12.974 START TEST unittest_iscsi 00:07:12.974 ************************************ 00:07:12.974 16:46:01 -- common/autotest_common.sh@1114 -- # unittest_iscsi 00:07:12.974 16:46:01 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:07:12.974 00:07:12.974 00:07:12.974 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.974 http://cunit.sourceforge.net/ 00:07:12.974 00:07:12.974 00:07:12.974 Suite: conn_suite 00:07:12.974 Test: read_task_split_in_order_case ...passed 00:07:12.974 Test: read_task_split_reverse_order_case ...passed 00:07:12.974 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:07:12.974 Test: process_non_read_task_completion_test 
...passed 00:07:12.974 Test: free_tasks_on_connection ...passed 00:07:12.974 Test: free_tasks_with_queued_datain ...passed 00:07:12.974 Test: abort_queued_datain_task_test ...passed 00:07:12.974 Test: abort_queued_datain_tasks_test ...passed 00:07:12.974 00:07:12.974 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.974 suites 1 1 n/a 0 0 00:07:12.974 tests 8 8 8 0 0 00:07:12.974 asserts 230 230 230 0 n/a 00:07:12.974 00:07:12.974 Elapsed time = 0.000 seconds 00:07:12.974 16:46:01 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:07:12.974 00:07:12.974 00:07:12.974 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.974 http://cunit.sourceforge.net/ 00:07:12.974 00:07:12.974 00:07:12.974 Suite: iscsi_suite 00:07:12.974 Test: param_negotiation_test ...passed 00:07:12.974 Test: list_negotiation_test ...passed 00:07:12.974 Test: parse_valid_test ...passed 00:07:12.974 Test: parse_invalid_test ...[2024-11-05 16:46:01.858784] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:07:12.974 [2024-11-05 16:46:01.859082] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:07:12.974 [2024-11-05 16:46:01.859141] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:07:12.974 [2024-11-05 16:46:01.859219] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:07:12.974 [2024-11-05 16:46:01.859381] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:07:12.974 [2024-11-05 16:46:01.859485] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:07:12.974 passed 00:07:12.974 00:07:12.974 [2024-11-05 16:46:01.859621] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:07:12.974 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.974 suites 1 1 n/a 0 0 00:07:12.974 tests 4 4 4 0 0 00:07:12.974 asserts 161 161 161 0 n/a 00:07:12.974 00:07:12.974 Elapsed time = 0.006 seconds 00:07:13.234 16:46:01 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:07:13.234 00:07:13.234 00:07:13.234 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.234 http://cunit.sourceforge.net/ 00:07:13.234 00:07:13.234 00:07:13.234 Suite: iscsi_target_node_suite 00:07:13.234 Test: add_lun_test_cases ...[2024-11-05 16:46:01.892179] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:07:13.234 [2024-11-05 16:46:01.892502] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:07:13.234 [2024-11-05 16:46:01.892624] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:13.234 [2024-11-05 16:46:01.892673] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:13.234 passed 00:07:13.234 Test: allow_any_allowed ...passed 00:07:13.234 Test: allow_ipv6_allowed ...passed 00:07:13.234 Test: allow_ipv6_denied ...passed 00:07:13.234 Test: allow_ipv6_invalid ...passed 00:07:13.234 Test: allow_ipv4_allowed ...passed 00:07:13.234 Test: allow_ipv4_denied ...passed 00:07:13.234 Test: allow_ipv4_invalid ...passed 
00:07:13.234 Test: node_access_allowed ...[2024-11-05 16:46:01.892712] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:07:13.234 passed 00:07:13.234 Test: node_access_denied_by_empty_netmask ...passed 00:07:13.234 Test: node_access_multi_initiator_groups_cases ...passed 00:07:13.234 Test: allow_iscsi_name_multi_maps_case ...passed 00:07:13.234 Test: chap_param_test_cases ...[2024-11-05 16:46:01.893180] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:07:13.234 [2024-11-05 16:46:01.893232] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:07:13.234 [2024-11-05 16:46:01.893305] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:07:13.234 [2024-11-05 16:46:01.893346] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:07:13.234 [2024-11-05 16:46:01.893387] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:07:13.234 passed 00:07:13.234 00:07:13.234 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.234 suites 1 1 n/a 0 0 00:07:13.234 tests 13 13 13 0 0 00:07:13.234 asserts 50 50 50 0 n/a 00:07:13.234 00:07:13.234 Elapsed time = 0.001 seconds 00:07:13.234 16:46:01 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:07:13.234 00:07:13.234 00:07:13.234 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.234 http://cunit.sourceforge.net/ 00:07:13.234 00:07:13.234 00:07:13.234 Suite: iscsi_suite 00:07:13.234 Test: op_login_check_target_test ...[2024-11-05 16:46:01.926079] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:07:13.234 passed 00:07:13.234 Test: op_login_session_normal_test ...[2024-11-05 16:46:01.926446] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:13.234 [2024-11-05 16:46:01.926510] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:13.234 [2024-11-05 16:46:01.926589] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:13.234 [2024-11-05 16:46:01.926661] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:07:13.234 [2024-11-05 16:46:01.926803] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:13.234 [2024-11-05 16:46:01.926970] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:07:13.234 passed[2024-11-05 16:46:01.927155] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:13.234 00:07:13.234 Test: maxburstlength_test ...[2024-11-05 16:46:01.927735] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T 
PDU 00:07:13.234 [2024-11-05 16:46:01.927971] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:07:13.234 passed 00:07:13.234 Test: underflow_for_read_transfer_test ...passed 00:07:13.234 Test: underflow_for_zero_read_transfer_test ...passed 00:07:13.234 Test: underflow_for_request_sense_test ...passed 00:07:13.234 Test: underflow_for_check_condition_test ...passed 00:07:13.234 Test: add_transfer_task_test ...passed 00:07:13.234 Test: get_transfer_task_test ...passed 00:07:13.234 Test: del_transfer_task_test ...passed 00:07:13.234 Test: clear_all_transfer_tasks_test ...passed 00:07:13.234 Test: build_iovs_test ...passed 00:07:13.234 Test: build_iovs_with_md_test ...passed 00:07:13.235 Test: pdu_hdr_op_login_test ...[2024-11-05 16:46:01.932342] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:07:13.235 [2024-11-05 16:46:01.932632] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:07:13.235 [2024-11-05 16:46:01.932863] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:07:13.235 passed 00:07:13.235 Test: pdu_hdr_op_text_test ...[2024-11-05 16:46:01.933356] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:13.235 [2024-11-05 16:46:01.933612] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:07:13.235 [2024-11-05 16:46:01.933816] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:07:13.235 passed 00:07:13.235 Test: pdu_hdr_op_logout_test ...[2024-11-05 16:46:01.934285] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
00:07:13.235 passed 00:07:13.235 Test: pdu_hdr_op_scsi_test ...[2024-11-05 16:46:01.934862] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:13.235 [2024-11-05 16:46:01.935084] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:13.235 [2024-11-05 16:46:01.935300] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:07:13.235 [2024-11-05 16:46:01.935609] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:13.235 [2024-11-05 16:46:01.935871] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:07:13.235 [2024-11-05 16:46:01.936206] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:07:13.235 passed 00:07:13.235 Test: pdu_hdr_op_task_mgmt_test ...[2024-11-05 16:46:01.936683] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:07:13.235 [2024-11-05 16:46:01.936927] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:07:13.235 passed 00:07:13.235 Test: pdu_hdr_op_nopout_test ...[2024-11-05 16:46:01.937551] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:07:13.235 [2024-11-05 16:46:01.937803] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:13.235 [2024-11-05 16:46:01.938005] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:13.235 [2024-11-05 16:46:01.938207] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:07:13.235 passed 00:07:13.235 Test: pdu_hdr_op_data_test ...[2024-11-05 16:46:01.938599] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:07:13.235 [2024-11-05 16:46:01.938841] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:07:13.235 [2024-11-05 16:46:01.939161] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:13.235 [2024-11-05 16:46:01.939399] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:07:13.235 [2024-11-05 16:46:01.939660] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:07:13.235 [2024-11-05 16:46:01.939918] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:07:13.235 [2024-11-05 16:46:01.940126] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:07:13.235 passed 00:07:13.235 Test: empty_text_with_cbit_test ...passed 00:07:13.235 Test: pdu_payload_read_test ...[2024-11-05 
16:46:01.943037] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:07:13.235 passed 00:07:13.235 Test: data_out_pdu_sequence_test ...passed 00:07:13.235 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:07:13.235 00:07:13.235 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.235 suites 1 1 n/a 0 0 00:07:13.235 tests 24 24 24 0 0 00:07:13.235 asserts 150253 150253 150253 0 n/a 00:07:13.235 00:07:13.235 Elapsed time = 0.020 seconds 00:07:13.235 16:46:01 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:07:13.235 00:07:13.235 00:07:13.235 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.235 http://cunit.sourceforge.net/ 00:07:13.235 00:07:13.235 00:07:13.235 Suite: init_grp_suite 00:07:13.235 Test: create_initiator_group_success_case ...passed 00:07:13.235 Test: find_initiator_group_success_case ...passed 00:07:13.235 Test: register_initiator_group_twice_case ...passed 00:07:13.235 Test: add_initiator_name_success_case ...passed 00:07:13.235 Test: add_initiator_name_fail_case ...[2024-11-05 16:46:01.988140] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:07:13.235 passed 00:07:13.235 Test: delete_all_initiator_names_success_case ...passed 00:07:13.235 Test: add_netmask_success_case ...passed 00:07:13.235 Test: add_netmask_fail_case ...[2024-11-05 16:46:01.989137] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:07:13.235 passed 00:07:13.235 Test: delete_all_netmasks_success_case ...passed 00:07:13.235 Test: initiator_name_overwrite_all_to_any_case ...passed 00:07:13.235 Test: netmask_overwrite_all_to_any_case ...passed 00:07:13.235 Test: add_delete_initiator_names_case ...passed 00:07:13.235 Test: add_duplicated_initiator_names_case ...passed 00:07:13.235 Test: delete_nonexisting_initiator_names_case ...passed 00:07:13.235 Test: add_delete_netmasks_case ...passed 00:07:13.235 Test: add_duplicated_netmasks_case ...passed 00:07:13.235 Test: delete_nonexisting_netmasks_case ...passed 00:07:13.235 00:07:13.235 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.235 suites 1 1 n/a 0 0 00:07:13.235 tests 17 17 17 0 0 00:07:13.235 asserts 108 108 108 0 n/a 00:07:13.235 00:07:13.235 Elapsed time = 0.001 seconds 00:07:13.235 16:46:02 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:07:13.235 00:07:13.235 00:07:13.235 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.235 http://cunit.sourceforge.net/ 00:07:13.235 00:07:13.235 00:07:13.235 Suite: portal_grp_suite 00:07:13.235 Test: portal_create_ipv4_normal_case ...passed 00:07:13.235 Test: portal_create_ipv6_normal_case ...passed 00:07:13.235 Test: portal_create_ipv4_wildcard_case ...passed 00:07:13.235 Test: portal_create_ipv6_wildcard_case ...passed 00:07:13.235 Test: portal_create_twice_case ...[2024-11-05 16:46:02.021690] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:07:13.235 passed 00:07:13.235 Test: portal_grp_register_unregister_case ...passed 00:07:13.235 Test: portal_grp_register_twice_case ...passed 00:07:13.235 Test: portal_grp_add_delete_case ...passed 00:07:13.235 Test: portal_grp_add_delete_twice_case ...passed 00:07:13.235 00:07:13.235 Run Summary: 
Type Total Ran Passed Failed Inactive 00:07:13.235 suites 1 1 n/a 0 0 00:07:13.235 tests 9 9 9 0 0 00:07:13.235 asserts 44 44 44 0 n/a 00:07:13.235 00:07:13.235 Elapsed time = 0.004 seconds 00:07:13.235 00:07:13.235 real 0m0.238s 00:07:13.235 user 0m0.144s 00:07:13.235 sys 0m0.082s 00:07:13.235 16:46:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.235 16:46:02 -- common/autotest_common.sh@10 -- # set +x 00:07:13.235 ************************************ 00:07:13.235 END TEST unittest_iscsi 00:07:13.235 ************************************ 00:07:13.235 16:46:02 -- unit/unittest.sh@219 -- # run_test unittest_json unittest_json 00:07:13.235 16:46:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.235 16:46:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.235 16:46:02 -- common/autotest_common.sh@10 -- # set +x 00:07:13.235 ************************************ 00:07:13.235 START TEST unittest_json 00:07:13.235 ************************************ 00:07:13.235 16:46:02 -- common/autotest_common.sh@1114 -- # unittest_json 00:07:13.235 16:46:02 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:07:13.235 00:07:13.235 00:07:13.235 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.235 http://cunit.sourceforge.net/ 00:07:13.235 00:07:13.235 00:07:13.235 Suite: json 00:07:13.235 Test: test_parse_literal ...passed 00:07:13.235 Test: test_parse_string_simple ...passed 00:07:13.235 Test: test_parse_string_control_chars ...passed 00:07:13.235 Test: test_parse_string_utf8 ...passed 00:07:13.235 Test: test_parse_string_escapes_twochar ...passed 00:07:13.235 Test: test_parse_string_escapes_unicode ...passed 00:07:13.235 Test: test_parse_number ...passed 00:07:13.235 Test: test_parse_array ...passed 00:07:13.235 Test: test_parse_object ...passed 00:07:13.235 Test: test_parse_nesting ...passed 00:07:13.495 Test: test_parse_comment ...passed 00:07:13.495 00:07:13.495 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.495 suites 1 1 n/a 0 0 00:07:13.495 tests 11 11 11 0 0 00:07:13.495 asserts 1516 1516 1516 0 n/a 00:07:13.495 00:07:13.495 Elapsed time = 0.002 seconds 00:07:13.495 16:46:02 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:07:13.495 00:07:13.495 00:07:13.495 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.495 http://cunit.sourceforge.net/ 00:07:13.495 00:07:13.495 00:07:13.495 Suite: json 00:07:13.495 Test: test_strequal ...passed 00:07:13.495 Test: test_num_to_uint16 ...passed 00:07:13.495 Test: test_num_to_int32 ...passed 00:07:13.495 Test: test_num_to_uint64 ...passed 00:07:13.495 Test: test_decode_object ...passed 00:07:13.495 Test: test_decode_array ...passed 00:07:13.495 Test: test_decode_bool ...passed 00:07:13.495 Test: test_decode_uint16 ...passed 00:07:13.495 Test: test_decode_int32 ...passed 00:07:13.495 Test: test_decode_uint32 ...passed 00:07:13.495 Test: test_decode_uint64 ...passed 00:07:13.495 Test: test_decode_string ...passed 00:07:13.495 Test: test_decode_uuid ...passed 00:07:13.495 Test: test_find ...passed 00:07:13.495 Test: test_find_array ...passed 00:07:13.495 Test: test_iterating ...passed 00:07:13.495 Test: test_free_object ...passed 00:07:13.495 00:07:13.495 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.495 suites 1 1 n/a 0 0 00:07:13.495 tests 17 17 17 0 0 00:07:13.495 asserts 236 236 236 0 n/a 00:07:13.495 00:07:13.495 Elapsed time = 0.001 seconds 00:07:13.495 
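Each "Suite:" banner and "Run Summary" table above is printed by a standalone CUnit 2.1-3 test binary (json_parse_ut, json_util_ut, and so on). As a rough sketch of the harness behind that output -- assuming plain CUnit, with placeholder suite and test names rather than SPDK's actual registration code -- such a binary looks like this:

#include <CUnit/Basic.h>

/* A trivial test body; real suites (e.g. "json") register many of these. */
static void test_parse_literal(void)
{
	CU_ASSERT_EQUAL(1 + 1, 2);
}

int main(void)
{
	/* Set up the global test registry. */
	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}

	/* Register one suite and one test; the names here are placeholders. */
	CU_pSuite suite = CU_add_suite("json", NULL, NULL);
	if (suite == NULL ||
	    CU_add_test(suite, "test_parse_literal", test_parse_literal) == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}

	/* Verbose mode produces the per-test "...passed" lines and the
	 * "Run Summary" table seen throughout this log. */
	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();

	unsigned int failures = CU_get_number_of_failures();
	CU_cleanup_registry();
	return failures == 0 ? 0 : 1;
}

The surrounding run_test/xtrace lines come from the autotest shell wrapper, which times each binary (the "real/user/sys" triples) and prints the START TEST / END TEST banners around it.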
16:46:02 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:07:13.495 00:07:13.495 00:07:13.495 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.495 http://cunit.sourceforge.net/ 00:07:13.495 00:07:13.495 00:07:13.495 Suite: json 00:07:13.495 Test: test_write_literal ...passed 00:07:13.495 Test: test_write_string_simple ...passed 00:07:13.495 Test: test_write_string_escapes ...passed 00:07:13.495 Test: test_write_string_utf16le ...passed 00:07:13.495 Test: test_write_number_int32 ...passed 00:07:13.495 Test: test_write_number_uint32 ...passed 00:07:13.495 Test: test_write_number_uint128 ...passed 00:07:13.495 Test: test_write_string_number_uint128 ...passed 00:07:13.495 Test: test_write_number_int64 ...passed 00:07:13.495 Test: test_write_number_uint64 ...passed 00:07:13.495 Test: test_write_number_double ...passed 00:07:13.495 Test: test_write_uuid ...passed 00:07:13.495 Test: test_write_array ...passed 00:07:13.495 Test: test_write_object ...passed 00:07:13.495 Test: test_write_nesting ...passed 00:07:13.495 Test: test_write_val ...passed 00:07:13.495 00:07:13.495 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.495 suites 1 1 n/a 0 0 00:07:13.495 tests 16 16 16 0 0 00:07:13.495 asserts 918 918 918 0 n/a 00:07:13.495 00:07:13.495 Elapsed time = 0.005 seconds 00:07:13.495 16:46:02 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:07:13.495 00:07:13.495 00:07:13.495 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.495 http://cunit.sourceforge.net/ 00:07:13.495 00:07:13.495 00:07:13.495 Suite: jsonrpc 00:07:13.495 Test: test_parse_request ...passed 00:07:13.495 Test: test_parse_request_streaming ...passed 00:07:13.495 00:07:13.495 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.495 suites 1 1 n/a 0 0 00:07:13.495 tests 2 2 2 0 0 00:07:13.495 asserts 289 289 289 0 n/a 00:07:13.495 00:07:13.495 Elapsed time = 0.004 seconds 00:07:13.495 00:07:13.495 real 0m0.137s 00:07:13.495 user 0m0.089s 00:07:13.495 sys 0m0.037s 00:07:13.495 16:46:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.495 16:46:02 -- common/autotest_common.sh@10 -- # set +x 00:07:13.495 ************************************ 00:07:13.495 END TEST unittest_json 00:07:13.495 ************************************ 00:07:13.495 16:46:02 -- unit/unittest.sh@220 -- # run_test unittest_rpc unittest_rpc 00:07:13.495 16:46:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.495 16:46:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.495 16:46:02 -- common/autotest_common.sh@10 -- # set +x 00:07:13.495 ************************************ 00:07:13.495 START TEST unittest_rpc 00:07:13.495 ************************************ 00:07:13.495 16:46:02 -- common/autotest_common.sh@1114 -- # unittest_rpc 00:07:13.495 16:46:02 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:07:13.495 00:07:13.495 00:07:13.495 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.495 http://cunit.sourceforge.net/ 00:07:13.495 00:07:13.495 00:07:13.495 Suite: rpc 00:07:13.495 Test: test_jsonrpc_handler ...passed 00:07:13.495 Test: test_spdk_rpc_is_method_allowed ...passed 00:07:13.495 Test: test_rpc_get_methods ...[2024-11-05 16:46:02.299974] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:07:13.495 passed 00:07:13.495 Test: 
test_rpc_spdk_get_version ...passed 00:07:13.495 Test: test_spdk_rpc_listen_close ...passed 00:07:13.495 00:07:13.495 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.495 suites 1 1 n/a 0 0 00:07:13.495 tests 5 5 5 0 0 00:07:13.495 asserts 20 20 20 0 n/a 00:07:13.495 00:07:13.495 Elapsed time = 0.001 seconds 00:07:13.495 00:07:13.495 real 0m0.028s 00:07:13.495 user 0m0.015s 00:07:13.495 sys 0m0.013s 00:07:13.495 16:46:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.495 16:46:02 -- common/autotest_common.sh@10 -- # set +x 00:07:13.495 ************************************ 00:07:13.495 END TEST unittest_rpc 00:07:13.495 ************************************ 00:07:13.495 16:46:02 -- unit/unittest.sh@221 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:13.495 16:46:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.496 16:46:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.496 16:46:02 -- common/autotest_common.sh@10 -- # set +x 00:07:13.496 ************************************ 00:07:13.496 START TEST unittest_notify 00:07:13.496 ************************************ 00:07:13.496 16:46:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:13.496 00:07:13.496 00:07:13.496 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.496 http://cunit.sourceforge.net/ 00:07:13.496 00:07:13.496 00:07:13.496 Suite: app_suite 00:07:13.496 Test: notify ...passed 00:07:13.496 00:07:13.496 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.496 suites 1 1 n/a 0 0 00:07:13.496 tests 1 1 1 0 0 00:07:13.496 asserts 13 13 13 0 n/a 00:07:13.496 00:07:13.496 Elapsed time = 0.000 seconds 00:07:13.755 00:07:13.755 real 0m0.031s 00:07:13.755 user 0m0.012s 00:07:13.755 sys 0m0.019s 00:07:13.755 16:46:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.755 16:46:02 -- common/autotest_common.sh@10 -- # set +x 00:07:13.755 ************************************ 00:07:13.755 END TEST unittest_notify 00:07:13.755 ************************************ 00:07:13.755 16:46:02 -- unit/unittest.sh@222 -- # run_test unittest_nvme unittest_nvme 00:07:13.755 16:46:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.755 16:46:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.756 16:46:02 -- common/autotest_common.sh@10 -- # set +x 00:07:13.756 ************************************ 00:07:13.756 START TEST unittest_nvme 00:07:13.756 ************************************ 00:07:13.756 16:46:02 -- common/autotest_common.sh@1114 -- # unittest_nvme 00:07:13.756 16:46:02 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:07:13.756 00:07:13.756 00:07:13.756 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.756 http://cunit.sourceforge.net/ 00:07:13.756 00:07:13.756 00:07:13.756 Suite: nvme 00:07:13.756 Test: test_opc_data_transfer ...passed 00:07:13.756 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:07:13.756 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:07:13.756 Test: test_trid_parse_and_compare ...[2024-11-05 16:46:02.461570] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:07:13.756 [2024-11-05 16:46:02.462073] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:13.756 [2024-11-05 
16:46:02.462325] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:07:13.756 [2024-11-05 16:46:02.462490] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:13.756 [2024-11-05 16:46:02.462640] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:07:13.756 [2024-11-05 16:46:02.462885] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:13.756 passed 00:07:13.756 Test: test_trid_trtype_str ...passed 00:07:13.756 Test: test_trid_adrfam_str ...passed 00:07:13.756 Test: test_nvme_ctrlr_probe ...[2024-11-05 16:46:02.463720] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:13.756 passed 00:07:13.756 Test: test_spdk_nvme_probe ...[2024-11-05 16:46:02.464217] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:13.756 [2024-11-05 16:46:02.464387] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:13.756 [2024-11-05 16:46:02.464624] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:07:13.756 [2024-11-05 16:46:02.464797] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:13.756 passed 00:07:13.756 Test: test_spdk_nvme_connect ...[2024-11-05 16:46:02.465185] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:07:13.756 [2024-11-05 16:46:02.465673] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:13.756 [2024-11-05 16:46:02.465880] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:07:13.756 passed 00:07:13.756 Test: test_nvme_ctrlr_probe_internal ...[2024-11-05 16:46:02.466319] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:13.756 [2024-11-05 16:46:02.466490] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:07:13.756 passed 00:07:13.756 Test: test_nvme_init_controllers ...[2024-11-05 16:46:02.466906] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:07:13.756 passed 00:07:13.756 Test: test_nvme_driver_init ...[2024-11-05 16:46:02.467366] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:07:13.756 [2024-11-05 16:46:02.467551] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:13.756 [2024-11-05 16:46:02.581901] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:07:13.756 [2024-11-05 16:46:02.582278] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:07:13.756 passed 00:07:13.756 Test: test_spdk_nvme_detach ...passed 00:07:13.756 Test: test_nvme_completion_poll_cb ...passed 00:07:13.756 Test: test_nvme_user_copy_cmd_complete ...passed 00:07:13.756 Test: 
test_nvme_allocate_request_null ...passed 00:07:13.756 Test: test_nvme_allocate_request ...passed 00:07:13.756 Test: test_nvme_free_request ...passed 00:07:13.756 Test: test_nvme_allocate_request_user_copy ...passed 00:07:13.756 Test: test_nvme_robust_mutex_init_shared ...passed 00:07:13.756 Test: test_nvme_request_check_timeout ...passed 00:07:13.756 Test: test_nvme_wait_for_completion ...passed 00:07:13.756 Test: test_spdk_nvme_parse_func ...passed 00:07:13.756 Test: test_spdk_nvme_detach_async ...passed 00:07:13.756 Test: test_nvme_parse_addr ...[2024-11-05 16:46:02.586760] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:07:13.756 passed 00:07:13.756 00:07:13.756 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.756 suites 1 1 n/a 0 0 00:07:13.756 tests 25 25 25 0 0 00:07:13.756 asserts 326 326 326 0 n/a 00:07:13.756 00:07:13.756 Elapsed time = 0.008 seconds 00:07:13.756 16:46:02 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:07:13.756 00:07:13.756 00:07:13.756 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.756 http://cunit.sourceforge.net/ 00:07:13.756 00:07:13.756 00:07:13.756 Suite: nvme_ctrlr 00:07:13.756 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-11-05 16:46:02.617793] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:13.756 passed 00:07:13.756 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-11-05 16:46:02.619951] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:13.756 passed 00:07:13.756 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-11-05 16:46:02.621673] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:13.756 passed 00:07:13.756 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-11-05 16:46:02.623267] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:13.756 passed 00:07:13.756 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-11-05 16:46:02.625063] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:13.756 [2024-11-05 16:46:02.626467] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-05 16:46:02.627925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-05 16:46:02.629297] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:13.756 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-11-05 16:46:02.632273] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:13.756 [2024-11-05 16:46:02.634731] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-05 16:46:02.636291] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:13.756 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-11-05 16:46:02.639288] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.015 [2024-11-05 16:46:02.640759] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-05 16:46:02.643400] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:14.015 Test: test_nvme_ctrlr_init_delay ...[2024-11-05 16:46:02.646560] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.015 passed 00:07:14.015 Test: test_alloc_io_qpair_rr_1 ...[2024-11-05 16:46:02.648457] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.015 [2024-11-05 16:46:02.648846] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:14.015 [2024-11-05 16:46:02.649310] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:14.015 [2024-11-05 16:46:02.649600] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:14.015 [2024-11-05 16:46:02.649864] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:14.015 passed 00:07:14.015 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:07:14.015 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:07:14.015 Test: test_alloc_io_qpair_wrr_1 ...[2024-11-05 16:46:02.650970] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.015 passed 00:07:14.015 Test: test_alloc_io_qpair_wrr_2 ...[2024-11-05 16:46:02.651734] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.015 [2024-11-05 16:46:02.652096] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:14.015 passed 00:07:14.015 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-11-05 16:46:02.652854] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:07:14.015 [2024-11-05 16:46:02.653200] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:14.015 [2024-11-05 16:46:02.653503] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
00:07:14.015 [2024-11-05 16:46:02.653763] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:14.015 passed 00:07:14.015 Test: test_nvme_ctrlr_fail ...[2024-11-05 16:46:02.654267] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:07:14.015 passed 00:07:14.015 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:07:14.015 Test: test_nvme_ctrlr_set_supported_features ...passed 00:07:14.015 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:07:14.015 Test: test_nvme_ctrlr_test_active_ns ...[2024-11-05 16:46:02.655933] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.274 passed 00:07:14.274 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:07:14.274 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:07:14.274 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:07:14.275 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-11-05 16:46:02.984234] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.275 passed 00:07:14.275 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-11-05 16:46:02.992160] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.275 passed 00:07:14.275 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-11-05 16:46:02.993889] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.275 [2024-11-05 16:46:02.994140] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:07:14.275 passed 00:07:14.275 Test: test_alloc_io_qpair_fail ...[2024-11-05 16:46:02.995817] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.275 [2024-11-05 16:46:02.996073] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:07:14.275 passed 00:07:14.275 Test: test_nvme_ctrlr_add_remove_process ...passed 00:07:14.275 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:07:14.275 Test: test_nvme_ctrlr_set_state ...[2024-11-05 16:46:02.996799] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:07:14.275 passed 00:07:14.275 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-11-05 16:46:02.997145] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.275 passed 00:07:14.275 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-11-05 16:46:03.019051] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.275 passed 00:07:14.275 Test: test_nvme_ctrlr_ns_mgmt ...[2024-11-05 16:46:03.063482] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.275 passed 00:07:14.275 Test: test_nvme_ctrlr_reset ...[2024-11-05 16:46:03.065466] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.275 passed 00:07:14.275 Test: test_nvme_ctrlr_aer_callback ...[2024-11-05 16:46:03.066228] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.275 passed 00:07:14.275 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-11-05 16:46:03.068086] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.275 passed 00:07:14.275 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:07:14.275 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:07:14.275 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-11-05 16:46:03.070769] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.275 passed 00:07:14.275 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:07:14.275 Test: test_nvme_ctrlr_ana_resize ...[2024-11-05 16:46:03.072922] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.275 passed 00:07:14.275 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:07:14.275 Test: test_nvme_transport_ctrlr_ready ...[2024-11-05 16:46:03.075108] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:07:14.275 [2024-11-05 16:46:03.075312] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:07:14.275 passed 00:07:14.275 Test: test_nvme_ctrlr_disable ...[2024-11-05 16:46:03.075680] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.275 passed 00:07:14.275 00:07:14.275 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.275 suites 1 1 n/a 0 0 00:07:14.275 tests 43 43 43 0 0 00:07:14.275 asserts 10418 10418 10418 0 n/a 00:07:14.275 00:07:14.275 Elapsed time = 0.402 seconds 00:07:14.275 16:46:03 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:07:14.275 00:07:14.275 
00:07:14.275 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.275 http://cunit.sourceforge.net/ 00:07:14.275 00:07:14.275 00:07:14.275 Suite: nvme_ctrlr_cmd 00:07:14.275 Test: test_get_log_pages ...passed 00:07:14.275 Test: test_set_feature_cmd ...passed 00:07:14.275 Test: test_set_feature_ns_cmd ...passed 00:07:14.275 Test: test_get_feature_cmd ...passed 00:07:14.275 Test: test_get_feature_ns_cmd ...passed 00:07:14.275 Test: test_abort_cmd ...passed 00:07:14.275 Test: test_set_host_id_cmds ...[2024-11-05 16:46:03.122675] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:07:14.275 passed 00:07:14.275 Test: test_io_cmd_raw_no_payload_build ...passed 00:07:14.275 Test: test_io_raw_cmd ...passed 00:07:14.275 Test: test_io_raw_cmd_with_md ...passed 00:07:14.275 Test: test_namespace_attach ...passed 00:07:14.275 Test: test_namespace_detach ...passed 00:07:14.275 Test: test_namespace_create ...passed 00:07:14.275 Test: test_namespace_delete ...passed 00:07:14.275 Test: test_doorbell_buffer_config ...passed 00:07:14.275 Test: test_format_nvme ...passed 00:07:14.275 Test: test_fw_commit ...passed 00:07:14.275 Test: test_fw_image_download ...passed 00:07:14.275 Test: test_sanitize ...passed 00:07:14.275 Test: test_directive ...passed 00:07:14.275 Test: test_nvme_request_add_abort ...passed 00:07:14.275 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:07:14.275 Test: test_nvme_ctrlr_cmd_identify ...passed 00:07:14.275 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:07:14.275 00:07:14.275 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.275 suites 1 1 n/a 0 0 00:07:14.275 tests 24 24 24 0 0 00:07:14.275 asserts 198 198 198 0 n/a 00:07:14.275 00:07:14.275 Elapsed time = 0.001 seconds 00:07:14.275 16:46:03 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:07:14.275 00:07:14.275 00:07:14.275 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.275 http://cunit.sourceforge.net/ 00:07:14.275 00:07:14.275 00:07:14.275 Suite: nvme_ctrlr_cmd 00:07:14.275 Test: test_geometry_cmd ...passed 00:07:14.275 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:07:14.275 00:07:14.275 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.275 suites 1 1 n/a 0 0 00:07:14.275 tests 2 2 2 0 0 00:07:14.275 asserts 7 7 7 0 n/a 00:07:14.275 00:07:14.275 Elapsed time = 0.000 seconds 00:07:14.535 16:46:03 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:07:14.535 00:07:14.535 00:07:14.535 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.535 http://cunit.sourceforge.net/ 00:07:14.535 00:07:14.535 00:07:14.535 Suite: nvme 00:07:14.535 Test: test_nvme_ns_construct ...passed 00:07:14.535 Test: test_nvme_ns_uuid ...passed 00:07:14.535 Test: test_nvme_ns_csi ...passed 00:07:14.535 Test: test_nvme_ns_data ...passed 00:07:14.535 Test: test_nvme_ns_set_identify_data ...passed 00:07:14.535 Test: test_spdk_nvme_ns_get_values ...passed 00:07:14.535 Test: test_spdk_nvme_ns_is_active ...passed 00:07:14.535 Test: spdk_nvme_ns_supports ...passed 00:07:14.535 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:07:14.535 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:07:14.535 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:07:14.535 Test: test_nvme_ns_find_id_desc ...passed 00:07:14.535 00:07:14.535 Run Summary: Type Total Ran 
Passed Failed Inactive 00:07:14.535 suites 1 1 n/a 0 0 00:07:14.535 tests 12 12 12 0 0 00:07:14.535 asserts 83 83 83 0 n/a 00:07:14.535 00:07:14.535 Elapsed time = 0.001 seconds 00:07:14.535 16:46:03 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:07:14.535 00:07:14.535 00:07:14.535 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.535 http://cunit.sourceforge.net/ 00:07:14.535 00:07:14.535 00:07:14.535 Suite: nvme_ns_cmd 00:07:14.535 Test: split_test ...passed 00:07:14.535 Test: split_test2 ...passed 00:07:14.535 Test: split_test3 ...passed 00:07:14.535 Test: split_test4 ...passed 00:07:14.535 Test: test_nvme_ns_cmd_flush ...passed 00:07:14.535 Test: test_nvme_ns_cmd_dataset_management ...passed 00:07:14.535 Test: test_nvme_ns_cmd_copy ...passed 00:07:14.535 Test: test_io_flags ...[2024-11-05 16:46:03.211745] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:07:14.535 passed 00:07:14.535 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:07:14.535 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:07:14.535 Test: test_nvme_ns_cmd_reservation_register ...passed 00:07:14.535 Test: test_nvme_ns_cmd_reservation_release ...passed 00:07:14.535 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:07:14.535 Test: test_nvme_ns_cmd_reservation_report ...passed 00:07:14.535 Test: test_cmd_child_request ...passed 00:07:14.535 Test: test_nvme_ns_cmd_readv ...passed 00:07:14.535 Test: test_nvme_ns_cmd_read_with_md ...passed 00:07:14.535 Test: test_nvme_ns_cmd_writev ...[2024-11-05 16:46:03.214656] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:07:14.535 passed 00:07:14.535 Test: test_nvme_ns_cmd_write_with_md ...passed 00:07:14.535 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:07:14.535 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:07:14.535 Test: test_nvme_ns_cmd_comparev ...passed 00:07:14.535 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:07:14.535 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:07:14.535 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:07:14.535 Test: test_nvme_ns_cmd_setup_request ...passed 00:07:14.535 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:07:14.535 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-11-05 16:46:03.218499] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:14.535 passed 00:07:14.535 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-11-05 16:46:03.218919] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:14.535 passed 00:07:14.535 Test: test_nvme_ns_cmd_verify ...passed 00:07:14.535 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:07:14.535 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:07:14.535 00:07:14.535 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.535 suites 1 1 n/a 0 0 00:07:14.535 tests 32 32 32 0 0 00:07:14.535 asserts 550 550 550 0 n/a 00:07:14.535 00:07:14.535 Elapsed time = 0.005 seconds 00:07:14.535 16:46:03 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:07:14.535 00:07:14.535 00:07:14.535 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.535 http://cunit.sourceforge.net/ 00:07:14.535 00:07:14.535 00:07:14.535 Suite: 
nvme_ns_cmd 00:07:14.535 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:07:14.535 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:07:14.535 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:07:14.535 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:07:14.535 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:07:14.535 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:07:14.535 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:07:14.535 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:07:14.535 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:07:14.535 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:07:14.535 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:07:14.535 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:07:14.535 00:07:14.535 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.535 suites 1 1 n/a 0 0 00:07:14.535 tests 12 12 12 0 0 00:07:14.535 asserts 123 123 123 0 n/a 00:07:14.535 00:07:14.535 Elapsed time = 0.001 seconds 00:07:14.536 16:46:03 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:07:14.536 00:07:14.536 00:07:14.536 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.536 http://cunit.sourceforge.net/ 00:07:14.536 00:07:14.536 00:07:14.536 Suite: nvme_qpair 00:07:14.536 Test: test3 ...passed 00:07:14.536 Test: test_ctrlr_failed ...passed 00:07:14.536 Test: struct_packing ...passed 00:07:14.536 Test: test_nvme_qpair_process_completions ...[2024-11-05 16:46:03.281423] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:14.536 [2024-11-05 16:46:03.281842] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:14.536 [2024-11-05 16:46:03.282029] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:07:14.536 [2024-11-05 16:46:03.282257] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:07:14.536 passed 00:07:14.536 Test: test_nvme_completion_is_retry ...passed 00:07:14.536 Test: test_get_status_string ...passed 00:07:14.536 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:07:14.536 Test: test_nvme_qpair_submit_request ...passed 00:07:14.536 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:07:14.536 Test: test_nvme_qpair_manual_complete_request ...passed 00:07:14.536 Test: test_nvme_qpair_init_deinit ...[2024-11-05 16:46:03.283816] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:14.536 passed 00:07:14.536 Test: test_nvme_get_sgl_print_info ...passed 00:07:14.536 00:07:14.536 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.536 suites 1 1 n/a 0 0 00:07:14.536 tests 12 12 12 0 0 00:07:14.536 asserts 154 154 154 0 n/a 00:07:14.536 00:07:14.536 Elapsed time = 0.002 seconds 00:07:14.536 16:46:03 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:07:14.536 00:07:14.536 00:07:14.536 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.536 http://cunit.sourceforge.net/ 00:07:14.536 
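Each CUnit banner above and below is printed by a standalone test binary; the Suite:/Test: lines and the Run Summary table come straight from CUnit's basic interface. A minimal sketch of such a harness, with a placeholder suite and test name rather than the real SPDK sources:

    #include <CUnit/Basic.h>

    /* Placeholder case; the SPDK binaries register dozens of cases such as
     * test_io_flags or test_prp_list_append per suite. */
    static void test_example(void)
    {
        CU_ASSERT_EQUAL(1 + 1, 2);
    }

    int main(void)
    {
        CU_pSuite suite;

        if (CU_initialize_registry() != CUE_SUCCESS)
            return CU_get_error();

        suite = CU_add_suite("example_suite", NULL, NULL);
        if (suite == NULL ||
            CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        /* Produces the banner, the per-test "...passed" lines and the
         * Run Summary table seen throughout this log. */
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();
        CU_cleanup_registry();
        return CU_get_error();
    }

Note that the *ERROR* lines interleaved with "passed" in this log are expected error-path output from the code under test, not failures; an actual assertion failure would show up in the Failed column of the summary table.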
00:07:14.536 00:07:14.536 Suite: nvme_pcie 00:07:14.536 Test: test_prp_list_append ...[2024-11-05 16:46:03.318503] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:14.536 [2024-11-05 16:46:03.319388] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:07:14.536 [2024-11-05 16:46:03.319737] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:07:14.536 [2024-11-05 16:46:03.320323] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:14.536 [2024-11-05 16:46:03.320712] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:14.536 passed 00:07:14.536 Test: test_nvme_pcie_hotplug_monitor ...passed 00:07:14.536 Test: test_shadow_doorbell_update ...passed 00:07:14.536 Test: test_build_contig_hw_sgl_request ...passed 00:07:14.536 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:07:14.536 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:07:14.536 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:07:14.536 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-11-05 16:46:03.321942] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:14.536 passed 00:07:14.536 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:07:14.536 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:07:14.536 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-11-05 16:46:03.322954] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:07:14.536 passed 00:07:14.536 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-11-05 16:46:03.323516] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:07:14.536 passed 00:07:14.536 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-11-05 16:46:03.324024] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:07:14.536 passed 00:07:14.536 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-11-05 16:46:03.324505] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:07:14.536 passed 00:07:14.536 00:07:14.536 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.536 suites 1 1 n/a 0 0 00:07:14.536 tests 14 14 14 0 0 00:07:14.536 asserts 235 235 235 0 n/a 00:07:14.536 00:07:14.536 Elapsed time = 0.002 seconds 00:07:14.536 16:46:03 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:07:14.536 00:07:14.536 00:07:14.536 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.536 http://cunit.sourceforge.net/ 00:07:14.536 00:07:14.536 00:07:14.536 Suite: nvme_ns_cmd 00:07:14.536 Test: nvme_poll_group_create_test ...passed 00:07:14.536 Test: nvme_poll_group_add_remove_test ...passed 00:07:14.536 Test: nvme_poll_group_process_completions ...passed 00:07:14.536 Test: nvme_poll_group_destroy_test ...passed 00:07:14.536 Test: nvme_poll_group_get_free_stats ...passed 00:07:14.536 00:07:14.536 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.536 suites 1 1 n/a 0 0 00:07:14.536 tests 5 5 5 0 0 00:07:14.536 asserts 75 75 75 0 n/a 00:07:14.536 00:07:14.536 Elapsed time = 0.001 seconds 00:07:14.536 16:46:03 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:07:14.536 00:07:14.536 00:07:14.536 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.536 http://cunit.sourceforge.net/ 00:07:14.536 00:07:14.536 00:07:14.536 Suite: nvme_quirks 00:07:14.536 Test: test_nvme_quirks_striping ...passed 00:07:14.536 00:07:14.536 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.536 suites 1 1 n/a 0 0 00:07:14.536 tests 1 1 1 0 0 00:07:14.536 asserts 5 5 5 0 n/a 00:07:14.536 00:07:14.536 Elapsed time = 0.000 seconds 00:07:14.536 16:46:03 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:07:14.536 00:07:14.536 00:07:14.536 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.536 http://cunit.sourceforge.net/ 00:07:14.536 00:07:14.536 00:07:14.536 Suite: nvme_tcp 00:07:14.536 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:07:14.536 Test: test_nvme_tcp_build_iovs ...passed 00:07:14.536 Test: test_nvme_tcp_build_sgl_request ...[2024-11-05 16:46:03.408777] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffc45877380, and the iovcnt=16, remaining_size=28672 00:07:14.536 passed 00:07:14.536 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:07:14.536 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:07:14.536 Test: test_nvme_tcp_req_complete_safe ...passed 00:07:14.536 Test: test_nvme_tcp_req_get ...passed 00:07:14.536 Test: test_nvme_tcp_req_init ...passed 00:07:14.536 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:07:14.536 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:07:14.536 Test: 
test_nvme_tcp_qpair_set_recv_state ...[2024-11-05 16:46:03.411300] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc458790a0 is same with the state(6) to be set 00:07:14.536 passed 00:07:14.536 Test: test_nvme_tcp_alloc_reqs ...passed 00:07:14.536 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-11-05 16:46:03.412181] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc45878230 is same with the state(5) to be set 00:07:14.536 passed 00:07:14.536 Test: test_nvme_tcp_pdu_ch_handle ...[2024-11-05 16:46:03.412558] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffc45878d60 00:07:14.536 [2024-11-05 16:46:03.412731] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:07:14.536 [2024-11-05 16:46:03.412949] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc458786f0 is same with the state(5) to be set 00:07:14.536 [2024-11-05 16:46:03.413184] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:07:14.536 [2024-11-05 16:46:03.413405] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc458786f0 is same with the state(5) to be set 00:07:14.536 [2024-11-05 16:46:03.413579] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:07:14.536 [2024-11-05 16:46:03.413761] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc458786f0 is same with the state(5) to be set 00:07:14.536 [2024-11-05 16:46:03.413942] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc458786f0 is same with the state(5) to be set 00:07:14.536 [2024-11-05 16:46:03.414115] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc458786f0 is same with the state(5) to be set 00:07:14.536 [2024-11-05 16:46:03.414331] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc458786f0 is same with the state(5) to be set 00:07:14.536 [2024-11-05 16:46:03.414499] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc458786f0 is same with the state(5) to be set 00:07:14.536 [2024-11-05 16:46:03.414659] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc458786f0 is same with the state(5) to be set 00:07:14.536 passed 00:07:14.536 Test: test_nvme_tcp_qpair_connect_sock ...[2024-11-05 16:46:03.415141] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:07:14.537 [2024-11-05 16:46:03.415356] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:07:14.537 [2024-11-05 16:46:03.415744] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr 
nvme_parse_addr() failed 00:07:14.537 passed 00:07:14.537 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:07:14.537 Test: test_nvme_tcp_c2h_payload_handle ...[2024-11-05 16:46:03.416282] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffc458788a0): PDU Sequence Error 00:07:14.537 passed 00:07:14.537 Test: test_nvme_tcp_icresp_handle ...[2024-11-05 16:46:03.416690] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:07:14.537 [2024-11-05 16:46:03.416857] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:07:14.537 [2024-11-05 16:46:03.417020] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc45878240 is same with the state(5) to be set 00:07:14.537 [2024-11-05 16:46:03.417187] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:07:14.537 [2024-11-05 16:46:03.417343] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc45878240 is same with the state(5) to be set 00:07:14.537 [2024-11-05 16:46:03.417540] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc45878240 is same with the state(0) to be set 00:07:14.537 passed 00:07:14.537 Test: test_nvme_tcp_pdu_payload_handle ...[2024-11-05 16:46:03.417873] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffc45878d60): PDU Sequence Error 00:07:14.537 passed 00:07:14.537 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-11-05 16:46:03.418271] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffc45877520 00:07:14.537 passed 00:07:14.537 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:07:14.537 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-11-05 16:46:03.418927] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffc45876ba0, errno=0, rc=0 00:07:14.537 [2024-11-05 16:46:03.419119] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc45876ba0 is same with the state(5) to be set 00:07:14.537 [2024-11-05 16:46:03.419336] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc45876ba0 is same with the state(5) to be set 00:07:14.537 [2024-11-05 16:46:03.419549] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffc45876ba0 (0): Success 00:07:14.537 [2024-11-05 16:46:03.419709] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffc45876ba0 (0): Success 00:07:14.537 passed 00:07:14.795 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-11-05 16:46:03.534913] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
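The connect errors above (Unhandled ADRFAM, dst_addr nvme_parse_addr() failed) sit on the path a host application reaches through the public transport-ID API. A hedged sketch of that call path, assuming a hypothetical NVMe-oF TCP target at 127.0.0.1:4420 and an SPDK build where these public signatures are current:

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "tcp_connect_sketch";  /* arbitrary app name */
        if (spdk_env_init(&env_opts) != 0)
            return 1;

        /* The adrfam/traddr fields parsed here are what the connect_sock
         * error paths exercised above validate. */
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:127.0.0.1 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0)
            return 1;

        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "connect to target failed\n");
            return 1;
        }
        spdk_nvme_detach(ctrlr);
        return 0;
    }

The unit tests above drive the same validation logic directly with malformed address families and unparseable addresses, without needing a live target.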
00:07:14.795 [2024-11-05 16:46:03.535290] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:14.795 passed 00:07:14.795 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:07:14.795 Test: test_nvme_tcp_poll_group_get_stats ...[2024-11-05 16:46:03.535702] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:14.795 [2024-11-05 16:46:03.535860] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:14.795 passed 00:07:14.795 Test: test_nvme_tcp_ctrlr_construct ...[2024-11-05 16:46:03.536383] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:14.796 [2024-11-05 16:46:03.536564] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:14.796 [2024-11-05 16:46:03.536787] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:07:14.796 [2024-11-05 16:46:03.536960] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:14.796 [2024-11-05 16:46:03.537181] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:07:14.796 [2024-11-05 16:46:03.537384] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:14.796 passed 00:07:14.796 Test: test_nvme_tcp_qpair_submit_request ...[2024-11-05 16:46:03.537830] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:07:14.796 [2024-11-05 16:46:03.538006] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:07:14.796 passed 00:07:14.796 00:07:14.796 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.796 suites 1 1 n/a 0 0 00:07:14.796 tests 27 27 27 0 0 00:07:14.796 asserts 624 624 624 0 n/a 00:07:14.796 00:07:14.796 Elapsed time = 0.122 seconds 00:07:14.796 16:46:03 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:07:14.796 00:07:14.796 00:07:14.796 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.796 http://cunit.sourceforge.net/ 00:07:14.796 00:07:14.796 00:07:14.796 Suite: nvme_transport 00:07:14.796 Test: test_nvme_get_transport ...passed 00:07:14.796 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:07:14.796 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:07:14.796 Test: test_nvme_transport_poll_group_add_remove ...passed 00:07:14.796 Test: test_ctrlr_get_memory_domains ...passed 00:07:14.796 00:07:14.796 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.796 suites 1 1 n/a 0 0 00:07:14.796 tests 5 5 5 0 0 00:07:14.796 asserts 28 28 28 0 n/a 00:07:14.796 00:07:14.796 Elapsed time = 0.000 seconds 00:07:14.796 16:46:03 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:07:14.796 00:07:14.796 00:07:14.796 CUnit - A unit testing framework for 
C - Version 2.1-3 00:07:14.796 http://cunit.sourceforge.net/ 00:07:14.796 00:07:14.796 00:07:14.796 Suite: nvme_io_msg 00:07:14.796 Test: test_nvme_io_msg_send ...passed 00:07:14.796 Test: test_nvme_io_msg_process ...passed 00:07:14.796 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:07:14.796 00:07:14.796 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.796 suites 1 1 n/a 0 0 00:07:14.796 tests 3 3 3 0 0 00:07:14.796 asserts 56 56 56 0 n/a 00:07:14.796 00:07:14.796 Elapsed time = 0.000 seconds 00:07:14.796 16:46:03 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:07:14.796 00:07:14.796 00:07:14.796 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.796 http://cunit.sourceforge.net/ 00:07:14.796 00:07:14.796 00:07:14.796 Suite: nvme_pcie_common 00:07:14.796 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-11-05 16:46:03.647692] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:07:14.796 passed 00:07:14.796 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:07:14.796 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:07:14.796 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-11-05 16:46:03.648999] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:07:14.796 [2024-11-05 16:46:03.649249] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:07:14.796 [2024-11-05 16:46:03.649421] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:07:14.796 passed 00:07:14.796 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:07:14.796 Test: test_nvme_pcie_poll_group_get_stats ...[2024-11-05 16:46:03.650287] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:14.796 [2024-11-05 16:46:03.650485] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:14.796 passed 00:07:14.796 00:07:14.796 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.796 suites 1 1 n/a 0 0 00:07:14.796 tests 6 6 6 0 0 00:07:14.796 asserts 148 148 148 0 n/a 00:07:14.796 00:07:14.796 Elapsed time = 0.002 seconds 00:07:14.796 16:46:03 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:07:14.796 00:07:14.796 00:07:14.796 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.796 http://cunit.sourceforge.net/ 00:07:14.796 00:07:14.796 00:07:14.796 Suite: nvme_fabric 00:07:14.796 Test: test_nvme_fabric_prop_set_cmd ...passed 00:07:14.796 Test: test_nvme_fabric_prop_get_cmd ...passed 00:07:14.796 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:07:14.796 Test: test_nvme_fabric_discover_probe ...passed 00:07:14.796 Test: test_nvme_fabric_qpair_connect ...[2024-11-05 16:46:03.680125] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:07:14.796 passed 00:07:14.796 00:07:14.796 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.796 suites 1 
1 n/a 0 0 00:07:14.796 tests 5 5 5 0 0 00:07:14.796 asserts 60 60 60 0 n/a 00:07:14.796 00:07:14.796 Elapsed time = 0.001 seconds 00:07:15.055 16:46:03 -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:07:15.055 00:07:15.055 00:07:15.055 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.055 http://cunit.sourceforge.net/ 00:07:15.055 00:07:15.055 00:07:15.055 Suite: nvme_opal 00:07:15.055 Test: test_opal_nvme_security_recv_send_done ...passed 00:07:15.055 Test: test_opal_add_short_atom_header ...[2024-11-05 16:46:03.713468] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:07:15.055 passed 00:07:15.055 00:07:15.055 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.055 suites 1 1 n/a 0 0 00:07:15.055 tests 2 2 2 0 0 00:07:15.055 asserts 22 22 22 0 n/a 00:07:15.055 00:07:15.055 Elapsed time = 0.001 seconds 00:07:15.055 00:07:15.055 real 0m1.285s 00:07:15.055 user 0m0.659s 00:07:15.055 sys 0m0.418s 00:07:15.055 16:46:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.055 16:46:03 -- common/autotest_common.sh@10 -- # set +x 00:07:15.055 ************************************ 00:07:15.055 END TEST unittest_nvme 00:07:15.055 ************************************ 00:07:15.055 16:46:03 -- unit/unittest.sh@223 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:15.055 16:46:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:15.055 16:46:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.055 16:46:03 -- common/autotest_common.sh@10 -- # set +x 00:07:15.055 ************************************ 00:07:15.055 START TEST unittest_log 00:07:15.055 ************************************ 00:07:15.055 16:46:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:15.055 00:07:15.055 00:07:15.055 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.055 http://cunit.sourceforge.net/ 00:07:15.055 00:07:15.055 00:07:15.055 Suite: log 00:07:15.055 Test: log_test ...[2024-11-05 16:46:03.801211] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:07:15.055 [2024-11-05 16:46:03.801607] log_ut.c: 55:log_test: *DEBUG*: log test 00:07:15.055 log dump test: 00:07:15.055 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:07:15.055 spdk dump test: 00:07:15.055 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:07:15.055 spdk dump test: 00:07:15.055 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:07:15.055 00000010 65 20 63 68 61 72 73 e chars 00:07:15.055 passed 00:07:15.989 Test: deprecation ...passed 00:07:15.989 00:07:15.989 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.989 suites 1 1 n/a 0 0 00:07:15.989 tests 2 2 2 0 0 00:07:15.989 asserts 73 73 73 0 n/a 00:07:15.989 00:07:15.989 Elapsed time = 0.001 seconds 00:07:15.989 00:07:15.989 real 0m1.036s 00:07:15.989 user 0m0.034s 00:07:15.989 sys 0m0.001s 00:07:15.989 16:46:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.989 ************************************ 00:07:15.989 END TEST unittest_log 00:07:15.989 ************************************ 00:07:15.989 16:46:04 -- common/autotest_common.sh@10 -- # set +x 00:07:15.989 16:46:04 -- unit/unittest.sh@224 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:15.989 16:46:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 
']' 00:07:15.989 16:46:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.989 16:46:04 -- common/autotest_common.sh@10 -- # set +x 00:07:16.248 ************************************ 00:07:16.248 START TEST unittest_lvol 00:07:16.248 ************************************ 00:07:16.248 16:46:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:16.248 00:07:16.248 00:07:16.248 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.248 http://cunit.sourceforge.net/ 00:07:16.248 00:07:16.248 00:07:16.248 Suite: lvol 00:07:16.248 Test: lvs_init_unload_success ...[2024-11-05 16:46:04.898279] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:07:16.248 passed 00:07:16.248 Test: lvs_init_destroy_success ...[2024-11-05 16:46:04.899243] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:07:16.248 passed 00:07:16.248 Test: lvs_init_opts_success ...passed 00:07:16.248 Test: lvs_unload_lvs_is_null_fail ...[2024-11-05 16:46:04.900053] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:07:16.248 passed 00:07:16.248 Test: lvs_names ...[2024-11-05 16:46:04.900397] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:07:16.248 [2024-11-05 16:46:04.900612] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:07:16.248 [2024-11-05 16:46:04.900995] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:07:16.248 passed 00:07:16.248 Test: lvol_create_destroy_success ...passed 00:07:16.248 Test: lvol_create_fail ...[2024-11-05 16:46:04.902126] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:07:16.248 [2024-11-05 16:46:04.902375] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:07:16.248 passed 00:07:16.248 Test: lvol_destroy_fail ...[2024-11-05 16:46:04.903033] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:07:16.248 passed 00:07:16.248 Test: lvol_close ...[2024-11-05 16:46:04.903620] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:07:16.248 [2024-11-05 16:46:04.903803] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:07:16.248 passed 00:07:16.248 Test: lvol_resize ...passed 00:07:16.248 Test: lvol_set_read_only ...passed 00:07:16.248 Test: test_lvs_load ...[2024-11-05 16:46:04.905259] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:07:16.248 [2024-11-05 16:46:04.905480] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:07:16.248 passed 00:07:16.248 Test: lvols_load ...[2024-11-05 16:46:04.906170] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:16.248 [2024-11-05 16:46:04.906404] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:16.248 passed 00:07:16.248 Test: lvol_open ...passed 00:07:16.248 Test: lvol_snapshot ...passed 00:07:16.248 Test: lvol_snapshot_fail ...[2024-11-05 
16:46:04.907915] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:07:16.248 passed 00:07:16.248 Test: lvol_clone ...passed 00:07:16.248 Test: lvol_clone_fail ...[2024-11-05 16:46:04.908970] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:07:16.248 passed 00:07:16.249 Test: lvol_iter_clones ...passed 00:07:16.249 Test: lvol_refcnt ...[2024-11-05 16:46:04.910030] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 5e703d37-6764-48a7-9bb5-c758f1dbc594 because it is still open 00:07:16.249 passed 00:07:16.249 Test: lvol_names ...[2024-11-05 16:46:04.910720] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:07:16.249 [2024-11-05 16:46:04.911003] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:16.249 [2024-11-05 16:46:04.911375] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:07:16.249 passed 00:07:16.249 Test: lvol_create_thin_provisioned ...passed 00:07:16.249 Test: lvol_rename ...[2024-11-05 16:46:04.912381] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:16.249 [2024-11-05 16:46:04.912623] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:07:16.249 passed 00:07:16.249 Test: lvs_rename ...[2024-11-05 16:46:04.913183] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:07:16.249 passed 00:07:16.249 Test: lvol_inflate ...[2024-11-05 16:46:04.913694] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:16.249 passed 00:07:16.249 Test: lvol_decouple_parent ...[2024-11-05 16:46:04.914223] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:16.249 passed 00:07:16.249 Test: lvol_get_xattr ...passed 00:07:16.249 Test: lvol_esnap_reload ...passed 00:07:16.249 Test: lvol_esnap_create_bad_args ...[2024-11-05 16:46:04.915499] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:07:16.249 [2024-11-05 16:46:04.915603] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
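Several of the rejections above ("Name has no null terminator") are bounded-buffer checks: a name field handed in by the caller must contain a terminator within its fixed maximum length before any string routine touches it. A generic sketch of that style of check, using a hypothetical helper name rather than the SPDK-internal function:

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    /* Hypothetical helper: true only if a '\0' occurs within the first
     * maxlen bytes, so later string operations cannot overrun the buffer. */
    static bool name_is_terminated(const char *name, size_t maxlen)
    {
        return name != NULL && memchr(name, '\0', maxlen) != NULL;
    }

The lvs_verify_lvol_name failure logged above is one instance of this pattern; the nvmf NQN validation later in this run applies the same bound check.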
00:07:16.249 [2024-11-05 16:46:04.915712] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:07:16.249 [2024-11-05 16:46:04.916003] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:16.249 [2024-11-05 16:46:04.916285] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:07:16.249 passed 00:07:16.249 Test: lvol_esnap_create_delete ...passed 00:07:16.249 Test: lvol_esnap_load_esnaps ...[2024-11-05 16:46:04.917102] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:07:16.249 passed 00:07:16.249 Test: lvol_esnap_missing ...[2024-11-05 16:46:04.917533] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:16.249 [2024-11-05 16:46:04.917691] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:16.249 passed 00:07:16.249 Test: lvol_esnap_hotplug ... 00:07:16.249 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:07:16.249 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:07:16.249 [2024-11-05 16:46:04.919030] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 9fc9775a-f127-4c48-8556-db14ad6e547d: failed to create esnap bs_dev: error -12 00:07:16.249 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:07:16.249 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:07:16.249 [2024-11-05 16:46:04.919622] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol d1bdf890-ff92-44c8-b8d6-84da1856b5d0: failed to create esnap bs_dev: error -12 00:07:16.249 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:07:16.249 [2024-11-05 16:46:04.919977] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 996fdcb6-a887-4884-941c-cab65d25a9b9: failed to create esnap bs_dev: error -12 00:07:16.249 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:07:16.249 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:07:16.249 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:07:16.249 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:07:16.249 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:07:16.249 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:07:16.249 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:07:16.249 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:07:16.249 passed 00:07:16.249 Test: lvol_get_by ...passed 00:07:16.249 00:07:16.249 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.249 suites 1 1 n/a 0 0 00:07:16.249 tests 34 34 34 0 0 00:07:16.249 asserts 1439 1439 1439 0 n/a 00:07:16.249 00:07:16.249 Elapsed time = 0.014 seconds 00:07:16.249 ************************************ 00:07:16.249 END TEST unittest_lvol 00:07:16.249 
************************************ 00:07:16.249 00:07:16.249 real 0m0.065s 00:07:16.249 user 0m0.029s 00:07:16.249 sys 0m0.024s 00:07:16.249 16:46:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.249 16:46:04 -- common/autotest_common.sh@10 -- # set +x 00:07:16.249 16:46:04 -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:16.249 16:46:04 -- unit/unittest.sh@226 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:16.249 16:46:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:16.249 16:46:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.249 16:46:04 -- common/autotest_common.sh@10 -- # set +x 00:07:16.249 ************************************ 00:07:16.249 START TEST unittest_nvme_rdma 00:07:16.249 ************************************ 00:07:16.249 16:46:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:16.249 00:07:16.249 00:07:16.249 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.249 http://cunit.sourceforge.net/ 00:07:16.249 00:07:16.249 00:07:16.249 Suite: nvme_rdma 00:07:16.249 Test: test_nvme_rdma_build_sgl_request ...[2024-11-05 16:46:05.012327] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:07:16.249 [2024-11-05 16:46:05.012642] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:16.249 passed 00:07:16.249 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:07:16.249 Test: test_nvme_rdma_build_contig_request ...[2024-11-05 16:46:05.012744] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:07:16.249 [2024-11-05 16:46:05.012828] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:16.249 passed 00:07:16.249 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:07:16.249 Test: test_nvme_rdma_create_reqs ...[2024-11-05 16:46:05.012941] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:07:16.249 passed 00:07:16.249 Test: test_nvme_rdma_create_rsps ...passed 00:07:16.249 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-11-05 16:46:05.013233] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:07:16.249 [2024-11-05 16:46:05.013409] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:07:16.249 [2024-11-05 16:46:05.013466] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
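The same "Minimum queue size is 2" rejection appears in the pcie, tcp and rdma suites because every transport validates the requested size when creating an I/O qpair. From the public API side the size travels in the I/O qpair options; a minimal sketch, assuming ctrlr is an already-attached controller:

    #include "spdk/nvme.h"

    static struct spdk_nvme_qpair *
    alloc_smallest_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        /* Sizes 0 and 1 are rejected by every transport, as the unit tests
         * above show; 2 is the smallest size a transport will accept. */
        opts.io_queue_size = 2;
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }

A queue of size N can hold at most N-1 outstanding commands, which is why a size below 2 is never usable.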
00:07:16.249 passed 00:07:16.249 Test: test_nvme_rdma_poller_create ...passed 00:07:16.249 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:07:16.249 Test: test_nvme_rdma_ctrlr_construct ...[2024-11-05 16:46:05.013607] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:07:16.249 passed 00:07:16.249 Test: test_nvme_rdma_req_put_and_get ...passed 00:07:16.249 Test: test_nvme_rdma_req_init ...passed 00:07:16.249 Test: test_nvme_rdma_validate_cm_event ...[2024-11-05 16:46:05.013878] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:07:16.249 [2024-11-05 16:46:05.013932] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:07:16.249 passed 00:07:16.249 Test: test_nvme_rdma_qpair_init ...passed 00:07:16.249 Test: test_nvme_rdma_qpair_submit_request ...passed 00:07:16.249 Test: test_nvme_rdma_memory_domain ...[2024-11-05 16:46:05.014112] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:07:16.249 passed 00:07:16.249 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:07:16.249 Test: test_rdma_get_memory_translation ...[2024-11-05 16:46:05.014213] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:07:16.249 [2024-11-05 16:46:05.014275] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:07:16.249 passed 00:07:16.249 Test: test_get_rdma_qpair_from_wc ...passed 00:07:16.249 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:07:16.249 Test: test_nvme_rdma_poll_group_get_stats ...[2024-11-05 16:46:05.014351] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:16.249 [2024-11-05 16:46:05.014397] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:16.249 passed 00:07:16.250 Test: test_nvme_rdma_qpair_set_poller ...[2024-11-05 16:46:05.014510] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:07:16.250 [2024-11-05 16:46:05.014566] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:07:16.250 [2024-11-05 16:46:05.014605] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffea445b9a0 on poll group 0x60b0000001a0 00:07:16.250 [2024-11-05 16:46:05.014666] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:07:16.250 [2024-11-05 16:46:05.014713] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:07:16.250 [2024-11-05 16:46:05.014751] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffea445b9a0 on poll group 0x60b0000001a0 00:07:16.250 passed 00:07:16.250 00:07:16.250 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.250 suites 1 1 n/a 0 0 00:07:16.250 tests 22 22 22 0 0 00:07:16.250 asserts 412 412 412 0 n/a 00:07:16.250 00:07:16.250 Elapsed time = 0.003 seconds 00:07:16.250 [2024-11-05 16:46:05.014830] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:16.250 00:07:16.250 real 0m0.034s 00:07:16.250 user 0m0.021s 00:07:16.250 sys 0m0.013s 00:07:16.250 ************************************ 00:07:16.250 END TEST unittest_nvme_rdma 00:07:16.250 ************************************ 00:07:16.250 16:46:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.250 16:46:05 -- common/autotest_common.sh@10 -- # set +x 00:07:16.250 16:46:05 -- unit/unittest.sh@227 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:16.250 16:46:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:16.250 16:46:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.250 16:46:05 -- common/autotest_common.sh@10 -- # set +x 00:07:16.250 ************************************ 00:07:16.250 START TEST unittest_nvmf_transport 00:07:16.250 ************************************ 00:07:16.250 16:46:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:16.250 00:07:16.250 00:07:16.250 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.250 http://cunit.sourceforge.net/ 00:07:16.250 00:07:16.250 00:07:16.250 Suite: nvmf 00:07:16.250 Test: test_spdk_nvmf_transport_create ...[2024-11-05 16:46:05.100328] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:07:16.250 [2024-11-05 16:46:05.100694] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:07:16.250 [2024-11-05 16:46:05.100769] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:07:16.250 [2024-11-05 16:46:05.100901] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:07:16.250 passed 00:07:16.250 Test: test_nvmf_transport_poll_group_create ...passed 00:07:16.250 Test: test_spdk_nvmf_transport_opts_init ...[2024-11-05 16:46:05.101174] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:07:16.250 [2024-11-05 16:46:05.101268] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:07:16.250 [2024-11-05 16:46:05.101305] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:07:16.250 passed 00:07:16.250 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:07:16.250 00:07:16.250 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.250 suites 1 1 n/a 0 0 00:07:16.250 tests 4 4 4 0 0 00:07:16.250 asserts 49 49 49 0 n/a 00:07:16.250 00:07:16.250 Elapsed time = 0.001 seconds 00:07:16.250 00:07:16.250 real 0m0.038s 00:07:16.250 user 0m0.012s 00:07:16.250 sys 0m0.025s 00:07:16.250 16:46:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.250 16:46:05 -- common/autotest_common.sh@10 -- # set +x 00:07:16.250 ************************************ 00:07:16.250 END TEST unittest_nvmf_transport 00:07:16.250 ************************************ 00:07:16.509 16:46:05 -- unit/unittest.sh@228 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:16.509 16:46:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:16.509 16:46:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.509 16:46:05 -- common/autotest_common.sh@10 -- # set +x 00:07:16.509 ************************************ 00:07:16.509 START TEST unittest_rdma 00:07:16.509 ************************************ 00:07:16.509 16:46:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:16.509 00:07:16.509 00:07:16.509 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.509 http://cunit.sourceforge.net/ 00:07:16.509 00:07:16.509 00:07:16.509 Suite: rdma_common 00:07:16.509 Test: test_spdk_rdma_pd ...[2024-11-05 16:46:05.184223] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:07:16.509 [2024-11-05 16:46:05.184997] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:07:16.509 passed 00:07:16.509 00:07:16.509 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.509 suites 1 1 n/a 0 0 00:07:16.509 tests 1 1 1 0 0 00:07:16.509 asserts 31 31 31 0 n/a 00:07:16.509 00:07:16.509 Elapsed time = 0.001 seconds 00:07:16.509 00:07:16.509 real 0m0.028s 00:07:16.509 user 0m0.008s 00:07:16.509 sys 0m0.019s 00:07:16.509 16:46:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.509 16:46:05 -- common/autotest_common.sh@10 -- # set +x 00:07:16.509 ************************************ 00:07:16.509 END TEST unittest_rdma 00:07:16.509 ************************************ 00:07:16.509 16:46:05 -- unit/unittest.sh@231 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:16.509 16:46:05 -- unit/unittest.sh@232 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:16.509 16:46:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:16.509 16:46:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.509 16:46:05 -- common/autotest_common.sh@10 -- # set +x 00:07:16.509 ************************************ 00:07:16.509 START TEST unittest_nvme_cuse 00:07:16.509 ************************************ 00:07:16.510 16:46:05 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:16.510 00:07:16.510 00:07:16.510 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.510 http://cunit.sourceforge.net/ 00:07:16.510 00:07:16.510 00:07:16.510 Suite: nvme_cuse 00:07:16.510 Test: test_cuse_nvme_submit_io_read_write ...passed 00:07:16.510 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:07:16.510 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:07:16.510 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:07:16.510 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:07:16.510 Test: test_cuse_nvme_submit_io ...[2024-11-05 16:46:05.274778] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:07:16.510 passed 00:07:16.510 Test: test_cuse_nvme_reset ...passed 00:07:16.510 Test: test_nvme_cuse_stop ...[2024-11-05 16:46:05.275137] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:07:16.510 passed 00:07:16.510 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:07:16.510 00:07:16.510 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.510 suites 1 1 n/a 0 0 00:07:16.510 tests 9 9 9 0 0 00:07:16.510 asserts 121 121 121 0 n/a 00:07:16.510 00:07:16.510 Elapsed time = 0.002 seconds 00:07:16.510 00:07:16.510 real 0m0.034s 00:07:16.510 user 0m0.025s 00:07:16.510 sys 0m0.009s 00:07:16.510 16:46:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.510 16:46:05 -- common/autotest_common.sh@10 -- # set +x 00:07:16.510 ************************************ 00:07:16.510 END TEST unittest_nvme_cuse 00:07:16.510 ************************************ 00:07:16.510 16:46:05 -- unit/unittest.sh@235 -- # run_test unittest_nvmf unittest_nvmf 00:07:16.510 16:46:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:16.510 16:46:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.510 16:46:05 -- common/autotest_common.sh@10 -- # set +x 00:07:16.510 ************************************ 00:07:16.510 START TEST unittest_nvmf 00:07:16.510 ************************************ 00:07:16.510 16:46:05 -- common/autotest_common.sh@1114 -- # unittest_nvmf 00:07:16.510 16:46:05 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:07:16.510 00:07:16.510 00:07:16.510 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.510 http://cunit.sourceforge.net/ 00:07:16.510 00:07:16.510 00:07:16.510 Suite: nvmf 00:07:16.510 Test: test_get_log_page ...[2024-11-05 16:46:05.358518] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:07:16.510 passed 00:07:16.510 Test: test_process_fabrics_cmd ...passed 00:07:16.510 Test: test_connect ...[2024-11-05 16:46:05.359373] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:07:16.510 [2024-11-05 16:46:05.359527] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:07:16.510 [2024-11-05 16:46:05.359580] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:07:16.510 [2024-11-05 16:46:05.359625] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
00:07:16.510 [2024-11-05 16:46:05.359718] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:07:16.510 [2024-11-05 16:46:05.359759] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:07:16.510 [2024-11-05 16:46:05.359885] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:07:16.510 [2024-11-05 16:46:05.359943] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:07:16.510 [2024-11-05 16:46:05.360050] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:07:16.510 [2024-11-05 16:46:05.360138] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:07:16.510 [2024-11-05 16:46:05.360378] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:07:16.510 [2024-11-05 16:46:05.360476] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:07:16.510 [2024-11-05 16:46:05.360578] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:07:16.510 [2024-11-05 16:46:05.360654] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:07:16.510 [2024-11-05 16:46:05.360755] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:07:16.510 [2024-11-05 16:46:05.360897] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:07:16.510 passed 00:07:16.510 Test: test_get_ns_id_desc_list ...passed 00:07:16.510 Test: test_identify_ns ...[2024-11-05 16:46:05.361132] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:16.510 [2024-11-05 16:46:05.361352] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:07:16.510 [2024-11-05 16:46:05.361508] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:07:16.510 passed 00:07:16.510 Test: test_identify_ns_iocs_specific ...[2024-11-05 16:46:05.361653] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:16.510 [2024-11-05 16:46:05.361944] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:16.510 passed 00:07:16.510 Test: test_reservation_write_exclusive ...passed 00:07:16.510 Test: test_reservation_exclusive_access ...passed 00:07:16.510 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:07:16.510 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:07:16.510 Test: test_reservation_notification_log_page ...passed 00:07:16.510 Test: test_get_dif_ctx ...passed 00:07:16.510 Test: test_set_get_features ...[2024-11-05 16:46:05.362430] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:16.510 [2024-11-05 16:46:05.362490] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:16.510 [2024-11-05 16:46:05.362538] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:07:16.510 passed 00:07:16.510 Test: test_identify_ctrlr ...[2024-11-05 16:46:05.362606] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:07:16.510 passed 00:07:16.510 Test: test_identify_ctrlr_iocs_specific ...passed 00:07:16.510 Test: test_custom_admin_cmd ...passed 00:07:16.510 Test: test_fused_compare_and_write ...[2024-11-05 16:46:05.363081] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:07:16.510 [2024-11-05 16:46:05.363154] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:16.510 passed 00:07:16.510 Test: test_multi_async_event_reqs ...passed 00:07:16.510 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:07:16.510 Test: test_get_ana_log_page_multi_ns_per_anagrp ...[2024-11-05 16:46:05.363206] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:16.510 passed 00:07:16.510 Test: test_multi_async_events ...passed 00:07:16.510 Test: test_rae ...passed 00:07:16.510 Test: test_nvmf_ctrlr_create_destruct ...passed 00:07:16.510 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:07:16.510 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:07:16.510 Test: test_zcopy_read ...passed 00:07:16.510 Test: test_zcopy_write ...passed 00:07:16.510 Test: test_nvmf_property_set ...passed 00:07:16.510 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-11-05 16:46:05.363707] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:07:16.510 [2024-11-05 16:46:05.363867] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:16.510 [2024-11-05 16:46:05.363945] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:16.510 passed 00:07:16.510 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:07:16.510 00:07:16.510 [2024-11-05 16:46:05.364002] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:07:16.510 [2024-11-05 16:46:05.364048] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:07:16.510 [2024-11-05 16:46:05.364085] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:07:16.510 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.510 suites 1 1 n/a 0 0 00:07:16.510 tests 30 30 30 0 0 00:07:16.510 asserts 885 885 885 0 n/a 00:07:16.510 00:07:16.510 Elapsed time = 0.006 seconds 00:07:16.510 16:46:05 -- unit/unittest.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:07:16.510 00:07:16.510 00:07:16.510 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.510 http://cunit.sourceforge.net/ 00:07:16.510 00:07:16.510 00:07:16.510 Suite: nvmf 00:07:16.510 Test: test_get_rw_params ...passed 00:07:16.510 Test: test_lba_in_range ...passed 00:07:16.510 Test: test_get_dif_ctx ...passed 00:07:16.511 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:07:16.511 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-11-05 16:46:05.394668] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:07:16.771 [2024-11-05 16:46:05.395045] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:07:16.771 passed 00:07:16.771 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-11-05 16:46:05.395153] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:07:16.771 [2024-11-05 16:46:05.395215] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:07:16.771 [2024-11-05 16:46:05.395339] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:07:16.771 passed 00:07:16.771 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-11-05 16:46:05.395466] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:07:16.771 [2024-11-05 16:46:05.395520] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:07:16.771 [2024-11-05 16:46:05.395595] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:07:16.771 passed 00:07:16.771 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:07:16.771 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:07:16.771 00:07:16.771 [2024-11-05 16:46:05.395636] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:07:16.771 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.771 suites 1 1 n/a 0 0 00:07:16.771 tests 9 9 9 0 0 00:07:16.771 asserts 157 157 157 0 n/a 00:07:16.771 00:07:16.771 Elapsed time = 0.001 seconds 00:07:16.771 16:46:05 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:07:16.771 00:07:16.771 00:07:16.771 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.771 http://cunit.sourceforge.net/ 00:07:16.771 00:07:16.771 00:07:16.771 Suite: nvmf 00:07:16.771 Test: test_discovery_log ...passed 00:07:16.771 Test: test_discovery_log_with_filters ...passed 00:07:16.771 00:07:16.771 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.771 suites 1 1 n/a 0 0 00:07:16.771 tests 2 2 2 0 0 00:07:16.771 asserts 238 238 238 0 n/a 00:07:16.771 00:07:16.771 Elapsed time = 0.003 seconds 00:07:16.771 16:46:05 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:07:16.771 00:07:16.771 00:07:16.771 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.771 http://cunit.sourceforge.net/ 00:07:16.771 00:07:16.771 00:07:16.771 Suite: nvmf 
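The ctrlr_bdev_ut failures above are all deliberately out-of-range I/O: the fused compare-and-write pair disagrees on its LBA range, a command runs past the end of the namespace ("end of media"), and a 2-block transfer (2 * 512 = 1024 bytes) is handed a 1023-byte SGL. A minimal sketch of that style of bounds check, in C, with hypothetical names rather than SPDK's actual nvmf_bdev_ctrlr_* helpers:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative only: the two range checks the errors above exercise. */
    static bool
    io_range_valid(uint64_t start_lba, uint64_t num_blocks,
                   uint64_t ns_blocks, uint32_t block_size, uint64_t sgl_len)
    {
        /* "end of media": the range must fit on the namespace
         * (written this way to avoid overflow in start_lba + num_blocks) */
        if (start_lba > ns_blocks || num_blocks > ns_blocks - start_lba) {
            return false;
        }
        /* "NLB * block size > SGL length": the SGL must cover the transfer */
        if (num_blocks * (uint64_t)block_size > sgl_len) {
            return false;
        }
        return true;
    }

The 1023-byte SGL case in the log fails the second check exactly: 2 * 512 = 1024 > 1023.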
00:07:16.771 Test: nvmf_test_create_subsystem ...[2024-11-05 16:46:05.465211] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:07:16.771 [2024-11-05 16:46:05.465530] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:07:16.771 [2024-11-05 16:46:05.465625] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:07:16.771 [2024-11-05 16:46:05.465670] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:07:16.771 [2024-11-05 16:46:05.465706] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:07:16.771 [2024-11-05 16:46:05.465763] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:07:16.772 [2024-11-05 16:46:05.465871] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:07:16.772 [2024-11-05 16:46:05.466049] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
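The subsystem_ut records above and immediately below probe the NQN grammar one rule at a time: a user-specified name after the ':', dot-separated domain labels that start with a letter and end alphanumeric, no over-long labels, a 223-byte overall cap, and valid UTF-8. A hedged sketch of the label and length rules (illustrative helpers, not SPDK's nvmf_nqn_is_valid internals; the 63-byte label limit is the usual DNS bound, assumed here):

    #include <ctype.h>
    #include <stdbool.h>
    #include <string.h>

    #define NQN_MAX_LEN   223  /* per "length 224 > max 223" above */
    #define LABEL_MAX_LEN 63   /* assumption: the standard DNS label limit */

    static bool
    domain_label_valid(const char *label, size_t len)
    {
        if (len == 0 || len > LABEL_MAX_LEN) {
            return false;  /* "At least one Label is too long" */
        }
        if (!isalpha((unsigned char)label[0])) {
            return false;  /* "Label names must start with a letter" */
        }
        if (!isalnum((unsigned char)label[len - 1])) {
            return false;  /* "...must end with an alphanumeric symbol" */
        }
        return true;
    }

    static bool
    nqn_length_valid(const char *nqn)
    {
        return strlen(nqn) <= NQN_MAX_LEN;
    }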
00:07:16.772 [2024-11-05 16:46:05.466158] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:07:16.772 passed 00:07:16.772 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-11-05 16:46:05.466213] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:16.772 [2024-11-05 16:46:05.466251] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:16.772 [2024-11-05 16:46:05.466396] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:07:16.772 [2024-11-05 16:46:05.466499] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:07:16.772 passed 00:07:16.772 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:07:16.772 Test: test_reservation_register ...[2024-11-05 16:46:05.466730] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:16.772 [2024-11-05 16:46:05.466845] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:07:16.772 passed 00:07:16.772 Test: test_reservation_register_with_ptpl ...passed 00:07:16.772 Test: test_reservation_acquire_preempt_1 ...[2024-11-05 16:46:05.467816] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:16.772 passed 00:07:16.772 Test: test_reservation_acquire_release_with_ptpl ...passed 00:07:16.772 Test: test_reservation_release ...[2024-11-05 16:46:05.469706] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:16.772 passed 00:07:16.772 Test: test_reservation_unregister_notification ...[2024-11-05 16:46:05.469941] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:16.772 passed 00:07:16.772 Test: test_reservation_release_notification ...[2024-11-05 16:46:05.470196] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:16.772 passed 00:07:16.772 Test: test_reservation_release_notification_write_exclusive ...[2024-11-05 16:46:05.470440] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:16.772 passed 00:07:16.772 Test: test_reservation_clear_notification ...[2024-11-05 16:46:05.470676] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:16.772 passed 00:07:16.772 Test: test_reservation_preempt_notification ...[2024-11-05 16:46:05.470952] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:16.772 passed 00:07:16.772 Test: test_spdk_nvmf_ns_event ...passed 00:07:16.772 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:07:16.772 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:07:16.772 Test: test_spdk_nvmf_subsystem_add_host ...[2024-11-05 16:46:05.471640] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:07:16.772 passed 00:07:16.772 Test: test_nvmf_ns_reservation_report ...[2024-11-05 16:46:05.471760] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:07:16.772 passed 00:07:16.772 Test: test_nvmf_nqn_is_valid ...[2024-11-05 16:46:05.471905] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:07:16.772 [2024-11-05 16:46:05.471997] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:07:16.772 [2024-11-05 16:46:05.472052] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:159edb0e-54f0-4e58-be00-196e9840d5c": uuid is not the correct length 00:07:16.772 [2024-11-05 16:46:05.472088] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:07:16.772 passed 00:07:16.772 Test: test_nvmf_ns_reservation_restore ...[2024-11-05 16:46:05.472197] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:07:16.772 passed 00:07:16.772 Test: test_nvmf_subsystem_state_change ...passed 00:07:16.772 Test: test_nvmf_reservation_custom_ops ...passed 00:07:16.772 00:07:16.772 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.772 suites 1 1 n/a 0 0 00:07:16.772 tests 22 22 22 0 0 00:07:16.772 asserts 407 407 407 0 n/a 00:07:16.772 00:07:16.772 Elapsed time = 0.008 seconds 00:07:16.772 16:46:05 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:07:16.772 00:07:16.772 00:07:16.772 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.772 http://cunit.sourceforge.net/ 00:07:16.772 00:07:16.772 00:07:16.772 Suite: nvmf 00:07:16.772 Test: test_nvmf_tcp_create ...[2024-11-05 16:46:05.529310] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 732:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:07:16.772 passed 00:07:16.772 Test: test_nvmf_tcp_destroy ...passed 00:07:16.772 Test: test_nvmf_tcp_poll_group_create ...passed 00:07:16.772 Test: test_nvmf_tcp_send_c2h_data ...passed 00:07:16.772 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:07:16.772 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:07:16.772 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:07:16.772 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-11-05 16:46:05.627946] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.772 [2024-11-05 16:46:05.628047] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd88f75c30 is same with the state(5) to be set 00:07:16.772 [2024-11-05 16:46:05.628145] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x7ffd88f75c30 is same with the state(5) to be set 00:07:16.772 [2024-11-05 16:46:05.628195] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.772 passed 00:07:16.772 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:07:16.772 Test: test_nvmf_tcp_icreq_handle ...[2024-11-05 16:46:05.628233] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd88f75c30 is same with the state(5) to be set 00:07:16.772 [2024-11-05 16:46:05.628322] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2091:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:07:16.772 [2024-11-05 16:46:05.628414] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.772 [2024-11-05 16:46:05.628490] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd88f75c30 is same with the state(5) to be set 00:07:16.772 [2024-11-05 16:46:05.628529] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2091:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:07:16.772 [2024-11-05 16:46:05.628570] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd88f75c30 is same with the state(5) to be set 00:07:16.772 [2024-11-05 16:46:05.628609] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.772 [2024-11-05 16:46:05.628651] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd88f75c30 is same with the state(5) to be set 00:07:16.772 passed 00:07:16.772 Test: test_nvmf_tcp_check_xfer_type ...passed 00:07:16.772 Test: test_nvmf_tcp_invalid_sgl ...[2024-11-05 16:46:05.628691] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:07:16.772 [2024-11-05 16:46:05.628752] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd88f75c30 is same with the state(5) to be set 00:07:16.772 [2024-11-05 16:46:05.628822] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2486:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:07:16.772 [2024-11-05 16:46:05.628876] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.772 [2024-11-05 16:46:05.628915] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd88f75c30 is same with the state(5) to be set 00:07:16.772 passed 00:07:16.772 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-11-05 16:46:05.628974] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2218:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffd88f76990 00:07:16.772 [2024-11-05 16:46:05.629063] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.772 [2024-11-05 16:46:05.629115] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd88f760f0 is same with the state(5) to be set 00:07:16.772 [2024-11-05 16:46:05.629158] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2275:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffd88f760f0 00:07:16.772 [2024-11-05 16:46:05.629203] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.772 [2024-11-05 16:46:05.629245] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd88f760f0 is same with the state(5) to be set 00:07:16.772 [2024-11-05 16:46:05.629293] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2228:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:07:16.772 [2024-11-05 16:46:05.629341] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.772 [2024-11-05 16:46:05.629396] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd88f760f0 is same with the state(5) to be set 00:07:16.773 [2024-11-05 16:46:05.629444] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2267:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:07:16.773 [2024-11-05 16:46:05.629484] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.773 [2024-11-05 16:46:05.629525] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd88f760f0 is same with the state(5) to be set 00:07:16.773 [2024-11-05 16:46:05.629569] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.773 [2024-11-05 16:46:05.629612] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd88f760f0 is same with the state(5) to be set 00:07:16.773 [2024-11-05 16:46:05.629675] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.773 [2024-11-05 16:46:05.629707] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd88f760f0 is same with the state(5) to be set 00:07:16.773 [2024-11-05 16:46:05.629775] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.773 [2024-11-05 16:46:05.629814] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd88f760f0 is same with the state(5) to be set 00:07:16.773 [2024-11-05 16:46:05.629859] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.773 [2024-11-05 16:46:05.629894] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd88f760f0 is same with the state(5) to be set 00:07:16.773 [2024-11-05 16:46:05.629957] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.773 [2024-11-05 16:46:05.630003] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd88f760f0 is same with the state(5) to be set 00:07:16.773 passed 00:07:16.773 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-11-05 
16:46:05.630055] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.773 [2024-11-05 16:46:05.630093] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd88f760f0 is same with the state(5) to be set 00:07:16.773 passed 00:07:16.773 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-11-05 16:46:05.653349] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:07:16.773 passed 00:07:16.773 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-11-05 16:46:05.653440] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:07:16.773 [2024-11-05 16:46:05.653858] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:07:16.773 [2024-11-05 16:46:05.653922] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:07:16.773 passed 00:07:16.773 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-11-05 16:46:05.654181] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:07:16.773 passed[2024-11-05 16:46:05.654242] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:07:16.773 00:07:16.773 00:07:16.773 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.773 suites 1 1 n/a 0 0 00:07:16.773 tests 17 17 17 0 0 00:07:16.773 asserts 222 222 222 0 n/a 00:07:16.773 00:07:16.773 Elapsed time = 0.148 seconds 00:07:17.032 16:46:05 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:07:17.032 00:07:17.032 00:07:17.032 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.032 http://cunit.sourceforge.net/ 00:07:17.032 00:07:17.032 00:07:17.032 Suite: nvmf 00:07:17.032 Test: test_nvmf_tgt_create_poll_group ...passed 00:07:17.032 00:07:17.032 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.032 suites 1 1 n/a 0 0 00:07:17.032 tests 1 1 1 0 0 00:07:17.032 asserts 17 17 17 0 n/a 00:07:17.032 00:07:17.032 Elapsed time = 0.022 seconds 00:07:17.032 00:07:17.032 real 0m0.465s 00:07:17.032 user 0m0.217s 00:07:17.032 sys 0m0.250s 00:07:17.032 16:46:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.032 16:46:05 -- common/autotest_common.sh@10 -- # set +x 00:07:17.032 ************************************ 00:07:17.032 END TEST unittest_nvmf 00:07:17.032 ************************************ 00:07:17.032 16:46:05 -- unit/unittest.sh@236 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:17.032 16:46:05 -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:17.032 16:46:05 -- unit/unittest.sh@242 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:07:17.032 16:46:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.032 16:46:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.032 16:46:05 -- common/autotest_common.sh@10 -- # set +x 00:07:17.032 ************************************ 00:07:17.032 START TEST 
unittest_nvmf_rdma 00:07:17.032 ************************************ 00:07:17.032 16:46:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:07:17.032 00:07:17.032 00:07:17.032 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.032 http://cunit.sourceforge.net/ 00:07:17.032 00:07:17.032 00:07:17.032 Suite: nvmf 00:07:17.032 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-11-05 16:46:05.879725] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1916:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:07:17.032 [2024-11-05 16:46:05.880410] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:07:17.032 [2024-11-05 16:46:05.880480] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:07:17.032 passed 00:07:17.032 Test: test_spdk_nvmf_rdma_request_process ...passed 00:07:17.032 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:07:17.032 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:07:17.032 Test: test_nvmf_rdma_opts_init ...passed 00:07:17.032 Test: test_nvmf_rdma_request_free_data ...passed 00:07:17.032 Test: test_nvmf_rdma_update_ibv_state ...[2024-11-05 16:46:05.881914] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 616:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 00:07:17.032 [2024-11-05 16:46:05.881964] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 627:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:07:17.032 passed 00:07:17.032 Test: test_nvmf_rdma_resources_create ...passed 00:07:17.032 Test: test_nvmf_rdma_qpair_compare ...passed 00:07:17.032 Test: test_nvmf_rdma_resize_cq ...[2024-11-05 16:46:05.883267] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1008:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. 
Current capacity 20, required 0 00:07:17.032 Using CQ of insufficient size may lead to CQ overrun 00:07:17.032 passed 00:07:17.032 00:07:17.032 [2024-11-05 16:46:05.883393] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1013:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:07:17.032 [2024-11-05 16:46:05.883518] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1021:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:17.032 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.032 suites 1 1 n/a 0 0 00:07:17.032 tests 10 10 10 0 0 00:07:17.032 asserts 584 584 584 0 n/a 00:07:17.032 00:07:17.032 Elapsed time = 0.004 seconds 00:07:17.032 00:07:17.032 real 0m0.038s 00:07:17.032 user 0m0.029s 00:07:17.032 sys 0m0.009s 00:07:17.032 16:46:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.032 16:46:05 -- common/autotest_common.sh@10 -- # set +x 00:07:17.032 ************************************ 00:07:17.032 END TEST unittest_nvmf_rdma 00:07:17.032 ************************************ 00:07:17.291 16:46:05 -- unit/unittest.sh@245 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:17.291 16:46:05 -- unit/unittest.sh@249 -- # run_test unittest_scsi unittest_scsi 00:07:17.291 16:46:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.291 16:46:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.291 16:46:05 -- common/autotest_common.sh@10 -- # set +x 00:07:17.291 ************************************ 00:07:17.291 START TEST unittest_scsi 00:07:17.291 ************************************ 00:07:17.291 16:46:05 -- common/autotest_common.sh@1114 -- # unittest_scsi 00:07:17.291 16:46:05 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:07:17.291 00:07:17.291 00:07:17.291 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.291 http://cunit.sourceforge.net/ 00:07:17.291 00:07:17.291 00:07:17.291 Suite: dev_suite 00:07:17.291 Test: dev_destruct_null_dev ...passed 00:07:17.291 Test: dev_destruct_zero_luns ...passed 00:07:17.291 Test: dev_destruct_null_lun ...passed 00:07:17.291 Test: dev_destruct_success ...passed 00:07:17.291 Test: dev_construct_num_luns_zero ...[2024-11-05 16:46:05.968418] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:07:17.291 passed 00:07:17.291 Test: dev_construct_no_lun_zero ...[2024-11-05 16:46:05.968714] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:07:17.291 passed 00:07:17.291 Test: dev_construct_null_lun ...passed 00:07:17.291 Test: dev_construct_name_too_long ...[2024-11-05 16:46:05.968769] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:07:17.291 [2024-11-05 16:46:05.968818] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:07:17.291 passed 00:07:17.291 Test: dev_construct_success ...passed 00:07:17.291 Test: dev_construct_success_lun_zero_not_first ...passed 00:07:17.291 Test: 
dev_queue_mgmt_task_success ...passed 00:07:17.291 Test: dev_queue_task_success ...passed 00:07:17.291 Test: dev_stop_success ...passed 00:07:17.291 Test: dev_add_port_max_ports ...[2024-11-05 16:46:05.969086] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:07:17.291 passed 00:07:17.291 Test: dev_add_port_construct_failure1 ...passed 00:07:17.291 Test: dev_add_port_construct_failure2 ...[2024-11-05 16:46:05.969193] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:07:17.291 [2024-11-05 16:46:05.969282] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:07:17.291 passed 00:07:17.291 Test: dev_add_port_success1 ...passed 00:07:17.291 Test: dev_add_port_success2 ...passed 00:07:17.291 Test: dev_add_port_success3 ...passed 00:07:17.291 Test: dev_find_port_by_id_num_ports_zero ...passed 00:07:17.291 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:07:17.291 Test: dev_find_port_by_id_success ...passed 00:07:17.291 Test: dev_add_lun_bdev_not_found ...passed 00:07:17.291 Test: dev_add_lun_no_free_lun_id ...[2024-11-05 16:46:05.969618] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:07:17.291 passed 00:07:17.291 Test: dev_add_lun_success1 ...passed 00:07:17.291 Test: dev_add_lun_success2 ...passed 00:07:17.291 Test: dev_check_pending_tasks ...passed 00:07:17.291 Test: dev_iterate_luns ...passed 00:07:17.291 Test: dev_find_free_lun ...passed 00:07:17.291 00:07:17.291 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.291 suites 1 1 n/a 0 0 00:07:17.291 tests 29 29 29 0 0 00:07:17.291 asserts 97 97 97 0 n/a 00:07:17.291 00:07:17.291 Elapsed time = 0.002 seconds 00:07:17.291 16:46:05 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:07:17.291 00:07:17.291 00:07:17.291 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.291 http://cunit.sourceforge.net/ 00:07:17.291 00:07:17.291 00:07:17.291 Suite: lun_suite 00:07:17.291 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-11-05 16:46:05.999620] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:07:17.291 passed 00:07:17.291 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-11-05 16:46:05.999944] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:07:17.291 passed 00:07:17.291 Test: lun_task_mgmt_execute_lun_reset ...passed 00:07:17.291 Test: lun_task_mgmt_execute_target_reset ...passed 00:07:17.291 Test: lun_task_mgmt_execute_invalid_case ...passed 00:07:17.291 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...[2024-11-05 16:46:06.000106] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:07:17.291 passed 00:07:17.291 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:07:17.291 Test: lun_append_task_null_lun_not_supported ...passed 00:07:17.291 Test: lun_execute_scsi_task_pending ...passed 00:07:17.291 Test: lun_execute_scsi_task_complete ...passed 00:07:17.291 Test: lun_execute_scsi_task_resize ...passed 00:07:17.291 Test: lun_destruct_success ...passed 00:07:17.291 Test: lun_construct_null_ctx ...passed 00:07:17.291 Test: lun_construct_success ...passed 00:07:17.291 Test: 
lun_reset_task_wait_scsi_task_complete ...[2024-11-05 16:46:06.000290] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:07:17.291 passed 00:07:17.291 Test: lun_reset_task_suspend_scsi_task ...passed 00:07:17.291 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:07:17.291 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:07:17.291 00:07:17.292 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.292 suites 1 1 n/a 0 0 00:07:17.292 tests 18 18 18 0 0 00:07:17.292 asserts 153 153 153 0 n/a 00:07:17.292 00:07:17.292 Elapsed time = 0.001 seconds 00:07:17.292 16:46:06 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:07:17.292 00:07:17.292 00:07:17.292 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.292 http://cunit.sourceforge.net/ 00:07:17.292 00:07:17.292 00:07:17.292 Suite: scsi_suite 00:07:17.292 Test: scsi_init ...passed 00:07:17.292 00:07:17.292 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.292 suites 1 1 n/a 0 0 00:07:17.292 tests 1 1 1 0 0 00:07:17.292 asserts 1 1 1 0 n/a 00:07:17.292 00:07:17.292 Elapsed time = 0.000 seconds 00:07:17.292 16:46:06 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:07:17.292 00:07:17.292 00:07:17.292 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.292 http://cunit.sourceforge.net/ 00:07:17.292 00:07:17.292 00:07:17.292 Suite: translation_suite 00:07:17.292 Test: mode_select_6_test ...passed 00:07:17.292 Test: mode_select_6_test2 ...passed 00:07:17.292 Test: mode_sense_6_test ...passed 00:07:17.292 Test: mode_sense_10_test ...passed 00:07:17.292 Test: inquiry_evpd_test ...passed 00:07:17.292 Test: inquiry_standard_test ...passed 00:07:17.292 Test: inquiry_overflow_test ...passed 00:07:17.292 Test: task_complete_test ...passed 00:07:17.292 Test: lba_range_test ...passed 00:07:17.292 Test: xfer_len_test ...[2024-11-05 16:46:06.055104] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:07:17.292 passed 00:07:17.292 Test: xfer_test ...passed 00:07:17.292 Test: scsi_name_padding_test ...passed 00:07:17.292 Test: get_dif_ctx_test ...passed 00:07:17.292 Test: unmap_split_test ...passed 00:07:17.292 00:07:17.292 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.292 suites 1 1 n/a 0 0 00:07:17.292 tests 14 14 14 0 0 00:07:17.292 asserts 1200 1200 1200 0 n/a 00:07:17.292 00:07:17.292 Elapsed time = 0.003 seconds 00:07:17.292 16:46:06 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:07:17.292 00:07:17.292 00:07:17.292 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.292 http://cunit.sourceforge.net/ 00:07:17.292 00:07:17.292 00:07:17.292 Suite: reservation_suite 00:07:17.292 Test: test_reservation_register ...[2024-11-05 16:46:06.081914] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:17.292 passed 00:07:17.292 Test: test_reservation_reserve ...[2024-11-05 16:46:06.082307] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:17.292 [2024-11-05 16:46:06.082405] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 
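The scsi dev_suite and translation_suite errors above are fixed-limit checks: a device name is capped at 255 bytes, a device holds at most 4 ports, and bdev_scsi_readwrite rejects an xfer_len of 8193 against an 8192 maximum. Collected as a hedged sketch, with the limits read straight off the log lines and hypothetical helper names:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define SCSI_DEV_MAX_NAME  255   /* "name longer than maximum allowed length 255" */
    #define SCSI_DEV_MAX_PORTS 4     /* "device already has 4 ports" */
    #define SCSI_MAX_XFER_LEN  8192  /* "xfer_len 8193 > maximum transfer length 8192" */

    static bool dev_name_valid(const char *name)   { return strlen(name) <= SCSI_DEV_MAX_NAME; }
    static bool dev_can_add_port(size_t num_ports) { return num_ports < SCSI_DEV_MAX_PORTS; }
    static bool xfer_len_valid(uint32_t xfer_len)  { return xfer_len <= SCSI_MAX_XFER_LEN; }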
00:07:17.292 [2024-11-05 16:46:06.082557] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:07:17.292 passed 00:07:17.292 Test: test_reservation_preempt_non_all_regs ...[2024-11-05 16:46:06.082664] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:17.292 [2024-11-05 16:46:06.082750] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:07:17.292 passed 00:07:17.292 Test: test_reservation_preempt_all_regs ...[2024-11-05 16:46:06.082938] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:17.292 passed 00:07:17.292 Test: test_reservation_cmds_conflict ...[2024-11-05 16:46:06.083120] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:17.292 [2024-11-05 16:46:06.083209] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:07:17.292 [2024-11-05 16:46:06.083268] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:07:17.292 [2024-11-05 16:46:06.083313] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:07:17.292 [2024-11-05 16:46:06.083375] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:07:17.292 passed 00:07:17.292 Test: test_scsi2_reserve_release ...passed 00:07:17.292 Test: test_pr_with_scsi2_reserve_release ...[2024-11-05 16:46:06.083421] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:07:17.292 passed 00:07:17.292 00:07:17.292 [2024-11-05 16:46:06.083572] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:17.292 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.292 suites 1 1 n/a 0 0 00:07:17.292 tests 7 7 7 0 0 00:07:17.292 asserts 257 257 257 0 n/a 00:07:17.292 00:07:17.292 Elapsed time = 0.002 seconds 00:07:17.292 00:07:17.292 real 0m0.146s 00:07:17.292 user 0m0.088s 00:07:17.292 sys 0m0.060s 00:07:17.292 16:46:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.292 16:46:06 -- common/autotest_common.sh@10 -- # set +x 00:07:17.292 ************************************ 00:07:17.292 END TEST unittest_scsi 00:07:17.292 ************************************ 00:07:17.292 16:46:06 -- unit/unittest.sh@252 -- # uname -s 00:07:17.292 16:46:06 -- unit/unittest.sh@252 -- # '[' Linux = Linux ']' 00:07:17.292 16:46:06 -- unit/unittest.sh@253 -- # run_test unittest_sock unittest_sock 00:07:17.292 16:46:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.292 16:46:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.292 16:46:06 -- common/autotest_common.sh@10 -- # set +x 00:07:17.292 ************************************ 00:07:17.292 START TEST unittest_sock 00:07:17.292 ************************************ 00:07:17.292 16:46:06 -- common/autotest_common.sh@1114 -- # unittest_sock 00:07:17.292 16:46:06 -- unit/unittest.sh@123 -- 
# /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:07:17.292 00:07:17.292 00:07:17.292 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.292 http://cunit.sourceforge.net/ 00:07:17.292 00:07:17.292 00:07:17.292 Suite: sock 00:07:17.292 Test: posix_sock ...passed 00:07:17.551 Test: ut_sock ...passed 00:07:17.551 Test: posix_sock_group ...passed 00:07:17.551 Test: ut_sock_group ...passed 00:07:17.551 Test: posix_sock_group_fairness ...passed 00:07:17.551 Test: _posix_sock_close ...passed 00:07:17.551 Test: sock_get_default_opts ...passed 00:07:17.551 Test: ut_sock_impl_get_set_opts ...passed 00:07:17.551 Test: posix_sock_impl_get_set_opts ...passed 00:07:17.551 Test: ut_sock_map ...passed 00:07:17.551 Test: override_impl_opts ...passed 00:07:17.551 Test: ut_sock_group_get_ctx ...passed 00:07:17.551 00:07:17.551 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.551 suites 1 1 n/a 0 0 00:07:17.551 tests 12 12 12 0 0 00:07:17.551 asserts 349 349 349 0 n/a 00:07:17.551 00:07:17.551 Elapsed time = 0.008 seconds 00:07:17.551 16:46:06 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:07:17.551 00:07:17.551 00:07:17.551 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.551 http://cunit.sourceforge.net/ 00:07:17.551 00:07:17.551 00:07:17.551 Suite: posix 00:07:17.551 Test: flush ...passed 00:07:17.551 00:07:17.551 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.551 suites 1 1 n/a 0 0 00:07:17.551 tests 1 1 1 0 0 00:07:17.551 asserts 28 28 28 0 n/a 00:07:17.551 00:07:17.551 Elapsed time = 0.000 seconds 00:07:17.551 16:46:06 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:17.551 00:07:17.551 real 0m0.086s 00:07:17.551 user 0m0.043s 00:07:17.551 sys 0m0.020s 00:07:17.551 16:46:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.551 16:46:06 -- common/autotest_common.sh@10 -- # set +x 00:07:17.551 ************************************ 00:07:17.551 END TEST unittest_sock 00:07:17.551 ************************************ 00:07:17.551 16:46:06 -- unit/unittest.sh@255 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:07:17.551 16:46:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.551 16:46:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.551 16:46:06 -- common/autotest_common.sh@10 -- # set +x 00:07:17.551 ************************************ 00:07:17.551 START TEST unittest_thread 00:07:17.551 ************************************ 00:07:17.551 16:46:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:07:17.551 00:07:17.551 00:07:17.551 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.551 http://cunit.sourceforge.net/ 00:07:17.551 00:07:17.551 00:07:17.551 Suite: io_channel 00:07:17.551 Test: thread_alloc ...passed 00:07:17.551 Test: thread_send_msg ...passed 00:07:17.551 Test: thread_poller ...passed 00:07:17.551 Test: poller_pause ...passed 00:07:17.551 Test: thread_for_each ...passed 00:07:17.551 Test: for_each_channel_remove ...passed 00:07:17.551 Test: for_each_channel_unreg ...[2024-11-05 16:46:06.322121] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2165:spdk_io_device_register: *ERROR*: io_device 0x7ffe80c7b190 already registered (old:0x613000000200 new:0x6130000003c0) 00:07:17.551 passed 00:07:17.551 Test: thread_name ...passed 
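The thread_ut "already registered" error above comes from for_each_channel_unreg registering the same io_device pointer twice. A toy sketch of that duplicate check, standing in for SPDK's real io_device table (the flat array and the names are illustrative only):

    #include <stdbool.h>
    #include <stddef.h>

    #define MAX_IO_DEVICES 64

    static void  *g_io_devices[MAX_IO_DEVICES];
    static size_t g_num_io_devices;

    static bool
    io_device_register(void *io_device)
    {
        /* reject a second registration of the same pointer,
         * as in "io_device ... already registered" above */
        for (size_t i = 0; i < g_num_io_devices; i++) {
            if (g_io_devices[i] == io_device) {
                return false;
            }
        }
        if (g_num_io_devices == MAX_IO_DEVICES) {
            return false;
        }
        g_io_devices[g_num_io_devices++] = io_device;
        return true;
    }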
00:07:17.551 Test: channel ...[2024-11-05 16:46:06.326168] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2299:spdk_get_io_channel: *ERROR*: could not find io_device 0x55b9179fa0e0 00:07:17.551 passed 00:07:17.551 Test: channel_destroy_races ...passed 00:07:17.551 Test: thread_exit_test ...[2024-11-05 16:46:06.331181] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 631:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:07:17.551 passed 00:07:17.551 Test: thread_update_stats_test ...passed 00:07:17.551 Test: nested_channel ...passed 00:07:17.551 Test: device_unregister_and_thread_exit_race ...passed 00:07:17.551 Test: cache_closest_timed_poller ...passed 00:07:17.551 Test: multi_timed_pollers_have_same_expiration ...passed 00:07:17.551 Test: io_device_lookup ...passed 00:07:17.551 Test: spdk_spin ...[2024-11-05 16:46:06.341882] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3063:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:17.551 [2024-11-05 16:46:06.341946] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7ffe80c7b180 00:07:17.551 [2024-11-05 16:46:06.342060] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3101:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:17.551 [2024-11-05 16:46:06.343737] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:17.551 [2024-11-05 16:46:06.343832] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7ffe80c7b180 00:07:17.551 [2024-11-05 16:46:06.343882] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3084:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:17.551 [2024-11-05 16:46:06.343927] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7ffe80c7b180 00:07:17.551 [2024-11-05 16:46:06.343965] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3084:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:17.551 [2024-11-05 16:46:06.344011] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7ffe80c7b180 00:07:17.551 [2024-11-05 16:46:06.344045] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3045:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:07:17.551 [2024-11-05 16:46:06.344104] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7ffe80c7b180 00:07:17.551 passed 00:07:17.551 Test: for_each_channel_and_thread_exit_race ...passed 00:07:17.551 Test: for_each_thread_and_thread_exit_race ...passed 00:07:17.551 00:07:17.551 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.551 suites 1 1 n/a 0 0 00:07:17.551 tests 20 20 20 0 0 00:07:17.551 asserts 409 409 409 0 n/a 00:07:17.551 00:07:17.551 Elapsed time = 0.049 seconds 00:07:17.551 00:07:17.551 real 0m0.090s 00:07:17.551 user 0m0.069s 00:07:17.551 sys 0m0.021s 00:07:17.551 16:46:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.551 16:46:06 -- common/autotest_common.sh@10 -- # set +x 00:07:17.551 ************************************ 00:07:17.551 END TEST unittest_thread 00:07:17.551 
************************************ 00:07:17.551 16:46:06 -- unit/unittest.sh@256 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:17.551 16:46:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.551 16:46:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.551 16:46:06 -- common/autotest_common.sh@10 -- # set +x 00:07:17.551 ************************************ 00:07:17.551 START TEST unittest_iobuf 00:07:17.551 ************************************ 00:07:17.551 16:46:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:17.865 00:07:17.865 00:07:17.865 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.865 http://cunit.sourceforge.net/ 00:07:17.865 00:07:17.865 00:07:17.865 Suite: io_channel 00:07:17.865 Test: iobuf ...passed 00:07:17.865 Test: iobuf_cache ...[2024-11-05 16:46:06.447185] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:17.865 [2024-11-05 16:46:06.447506] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:17.865 [2024-11-05 16:46:06.447649] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:07:17.865 [2024-11-05 16:46:06.447702] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:17.865 [2024-11-05 16:46:06.447779] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:17.865 [2024-11-05 16:46:06.447826] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
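The iobuf_cache errors above are provoked on purpose: each channel asks for a larger per-channel cache than the 4-element small and large pools can hand out (the log points at scripts/calc-iobuf.py for real sizing guidance). The feasibility condition is plain arithmetic; a hedged sketch with hypothetical names, not the spdk_iobuf implementation:

    #include <stdbool.h>
    #include <stdint.h>

    /* Every channel wants cache_size buffers carved out of a shared pool of
     * pool_count buffers, so the caches fit only if the product stays
     * within the pool. */
    static bool
    iobuf_caches_fit(uint32_t pool_count, uint32_t cache_size,
                     uint32_t num_channels)
    {
        return (uint64_t)cache_size * num_channels <= pool_count;
    }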
00:07:17.865 passed 00:07:17.865 00:07:17.865 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.865 suites 1 1 n/a 0 0 00:07:17.865 tests 2 2 2 0 0 00:07:17.865 asserts 107 107 107 0 n/a 00:07:17.865 00:07:17.865 Elapsed time = 0.006 seconds 00:07:17.865 00:07:17.865 real 0m0.039s 00:07:17.865 user 0m0.022s 00:07:17.865 sys 0m0.017s 00:07:17.865 16:46:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.865 16:46:06 -- common/autotest_common.sh@10 -- # set +x 00:07:17.865 ************************************ 00:07:17.865 END TEST unittest_iobuf 00:07:17.865 ************************************ 00:07:17.865 16:46:06 -- unit/unittest.sh@257 -- # run_test unittest_util unittest_util 00:07:17.865 16:46:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.865 16:46:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.865 16:46:06 -- common/autotest_common.sh@10 -- # set +x 00:07:17.865 ************************************ 00:07:17.865 START TEST unittest_util 00:07:17.865 ************************************ 00:07:17.865 16:46:06 -- common/autotest_common.sh@1114 -- # unittest_util 00:07:17.865 16:46:06 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:07:17.865 00:07:17.865 00:07:17.865 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.865 http://cunit.sourceforge.net/ 00:07:17.865 00:07:17.865 00:07:17.865 Suite: base64 00:07:17.865 Test: test_base64_get_encoded_strlen ...passed 00:07:17.865 Test: test_base64_get_decoded_len ...passed 00:07:17.865 Test: test_base64_encode ...passed 00:07:17.865 Test: test_base64_decode ...passed 00:07:17.865 Test: test_base64_urlsafe_encode ...passed 00:07:17.865 Test: test_base64_urlsafe_decode ...passed 00:07:17.865 00:07:17.865 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.865 suites 1 1 n/a 0 0 00:07:17.865 tests 6 6 6 0 0 00:07:17.865 asserts 112 112 112 0 n/a 00:07:17.865 00:07:17.865 Elapsed time = 0.000 seconds 00:07:17.865 16:46:06 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:07:17.865 00:07:17.865 00:07:17.865 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.865 http://cunit.sourceforge.net/ 00:07:17.865 00:07:17.865 00:07:17.865 Suite: bit_array 00:07:17.865 Test: test_1bit ...passed 00:07:17.865 Test: test_64bit ...passed 00:07:17.865 Test: test_find ...passed 00:07:17.865 Test: test_resize ...passed 00:07:17.865 Test: test_errors ...passed 00:07:17.865 Test: test_count ...passed 00:07:17.865 Test: test_mask_store_load ...passed 00:07:17.865 Test: test_mask_clear ...passed 00:07:17.865 00:07:17.865 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.865 suites 1 1 n/a 0 0 00:07:17.865 tests 8 8 8 0 0 00:07:17.865 asserts 5075 5075 5075 0 n/a 00:07:17.865 00:07:17.865 Elapsed time = 0.002 seconds 00:07:17.865 16:46:06 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:07:17.865 00:07:17.865 00:07:17.865 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.865 http://cunit.sourceforge.net/ 00:07:17.865 00:07:17.865 00:07:17.865 Suite: cpuset 00:07:17.865 Test: test_cpuset ...passed 00:07:17.865 Test: test_cpuset_parse ...[2024-11-05 16:46:06.584785] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:07:17.865 [2024-11-05 16:46:06.585087] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:07:17.865 [2024-11-05 16:46:06.585180] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:07:17.865 [2024-11-05 16:46:06.585546] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:07:17.865 [2024-11-05 16:46:06.585595] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:07:17.865 [2024-11-05 16:46:06.585632] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:07:17.865 [2024-11-05 16:46:06.585658] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:07:17.865 [2024-11-05 16:46:06.585819] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:07:17.865 passed 00:07:17.865 Test: test_cpuset_fmt ...passed 00:07:17.865 00:07:17.865 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.865 suites 1 1 n/a 0 0 00:07:17.865 tests 3 3 3 0 0 00:07:17.865 asserts 65 65 65 0 n/a 00:07:17.865 00:07:17.865 Elapsed time = 0.003 seconds 00:07:17.865 16:46:06 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:07:17.865 00:07:17.865 00:07:17.866 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.866 http://cunit.sourceforge.net/ 00:07:17.866 00:07:17.866 00:07:17.866 Suite: crc16 00:07:17.866 Test: test_crc16_t10dif ...passed 00:07:17.866 Test: test_crc16_t10dif_seed ...passed 00:07:17.866 Test: test_crc16_t10dif_copy ...passed 00:07:17.866 00:07:17.866 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.866 suites 1 1 n/a 0 0 00:07:17.866 tests 3 3 3 0 0 00:07:17.866 asserts 5 5 5 0 n/a 00:07:17.866 00:07:17.866 Elapsed time = 0.000 seconds 00:07:17.866 16:46:06 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:07:17.866 00:07:17.866 00:07:17.866 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.866 http://cunit.sourceforge.net/ 00:07:17.866 00:07:17.866 00:07:17.866 Suite: crc32_ieee 00:07:17.866 Test: test_crc32_ieee ...passed 00:07:17.866 00:07:17.866 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.866 suites 1 1 n/a 0 0 00:07:17.866 tests 1 1 1 0 0 00:07:17.866 asserts 1 1 1 0 n/a 00:07:17.866 00:07:17.866 Elapsed time = 0.000 seconds 00:07:17.866 16:46:06 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:07:17.866 00:07:17.866 00:07:17.866 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.866 http://cunit.sourceforge.net/ 00:07:17.866 00:07:17.866 00:07:17.866 Suite: crc32c 00:07:17.866 Test: test_crc32c ...passed 00:07:17.866 Test: test_crc32c_nvme ...passed 00:07:17.866 00:07:17.866 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.866 suites 1 1 n/a 0 0 00:07:17.866 tests 2 2 2 0 0 00:07:17.866 asserts 16 16 16 0 n/a 00:07:17.866 00:07:17.866 Elapsed time = 0.001 seconds 00:07:17.866 16:46:06 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:07:17.866 00:07:17.866 00:07:17.866 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.866 http://cunit.sourceforge.net/ 00:07:17.866 00:07:17.866 00:07:17.866 Suite: crc64 00:07:17.866 Test: test_crc64_nvme 
...passed 00:07:17.866 00:07:17.866 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.866 suites 1 1 n/a 0 0 00:07:17.866 tests 1 1 1 0 0 00:07:17.866 asserts 4 4 4 0 n/a 00:07:17.866 00:07:17.866 Elapsed time = 0.001 seconds 00:07:17.866 16:46:06 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:07:18.128 00:07:18.128 00:07:18.128 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.128 http://cunit.sourceforge.net/ 00:07:18.128 00:07:18.128 00:07:18.128 Suite: string 00:07:18.128 Test: test_parse_ip_addr ...passed 00:07:18.128 Test: test_str_chomp ...passed 00:07:18.128 Test: test_parse_capacity ...passed 00:07:18.128 Test: test_sprintf_append_realloc ...passed 00:07:18.128 Test: test_strtol ...passed 00:07:18.128 Test: test_strtoll ...passed 00:07:18.128 Test: test_strarray ...passed 00:07:18.128 Test: test_strcpy_replace ...passed 00:07:18.128 00:07:18.128 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.128 suites 1 1 n/a 0 0 00:07:18.128 tests 8 8 8 0 0 00:07:18.128 asserts 161 161 161 0 n/a 00:07:18.128 00:07:18.128 Elapsed time = 0.001 seconds 00:07:18.128 16:46:06 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:07:18.128 00:07:18.128 00:07:18.128 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.128 http://cunit.sourceforge.net/ 00:07:18.128 00:07:18.128 00:07:18.128 Suite: dif 00:07:18.128 Test: dif_generate_and_verify_test ...[2024-11-05 16:46:06.747835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:18.128 [2024-11-05 16:46:06.748317] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:18.128 [2024-11-05 16:46:06.748616] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:18.128 [2024-11-05 16:46:06.748908] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:18.128 [2024-11-05 16:46:06.749198] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:18.128 [2024-11-05 16:46:06.749497] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:18.128 passed 00:07:18.128 Test: dif_disable_check_test ...[2024-11-05 16:46:06.750529] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:18.128 [2024-11-05 16:46:06.750897] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:18.128 [2024-11-05 16:46:06.751194] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:18.128 passed 00:07:18.128 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-11-05 16:46:06.752284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:07:18.128 [2024-11-05 16:46:06.752610] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:07:18.128 [2024-11-05 
16:46:06.752937] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:07:18.128 [2024-11-05 16:46:06.753301] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:07:18.128 [2024-11-05 16:46:06.753637] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:18.128 [2024-11-05 16:46:06.753958] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:18.129 [2024-11-05 16:46:06.754276] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:18.129 [2024-11-05 16:46:06.754590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:18.129 [2024-11-05 16:46:06.754916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:18.129 [2024-11-05 16:46:06.755251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:18.129 [2024-11-05 16:46:06.755608] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:18.129 passed 00:07:18.129 Test: dif_apptag_mask_test ...[2024-11-05 16:46:06.755955] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:18.129 [2024-11-05 16:46:06.756267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:18.129 passed 00:07:18.129 Test: dif_sec_512_md_0_error_test ...[2024-11-05 16:46:06.756483] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:18.129 passed 00:07:18.129 Test: dif_sec_4096_md_0_error_test ...[2024-11-05 16:46:06.756540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:18.129 [2024-11-05 16:46:06.756589] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
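The parse_list errors at the top of this block are the cpuset suite feeding deliberately malformed core lists to SPDK's parser in lib/util/cpuset.c and asserting that each one is rejected. A minimal sketch of the same validation rules, as a hypothetical checker written for illustration rather than SPDK's actual parse_list, could look like this in C:

#include <ctype.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_CORES 1024 /* assumed limit; the log rejects core 1025 as out of range */

/* Validate a core list of the form "[a,b-c,...]"; returns 0 if well formed. */
static int validate_core_list(const char *s)
{
    if (*s++ != '[')
        return -EINVAL;
    if (*s == ']')
        return -EINVAL;                    /* "[]" fails on ']' */
    while (*s != ']') {
        char *end;
        long lo, hi;

        if (!isdigit((unsigned char)*s))
            return -EINVAL;                /* "[,10-11]" fails on ',' */
        errno = 0;
        lo = strtol(s, &end, 10);
        if (errno == ERANGE)
            return -EINVAL;                /* "[184467440737095516150]" overflows */
        hi = lo;
        if (*end == '-') {
            if (!isdigit((unsigned char)end[1]))
                return -EINVAL;            /* "[10--11]" fails on '-' */
            hi = strtol(end + 1, &end, 10);
        }
        if (hi < lo)
            return -EINVAL;                /* "[11-10]": invalid range (11 > 10) */
        if (hi >= MAX_CORES)
            return -EINVAL;                /* "[1025]" is out of range */
        s = end;
        if (*s == ',' && *++s == ']')
            return -EINVAL;                /* "[10-11,]" fails on ',' */
    }
    return 0;
}

int main(void)
{
    printf("%d %d\n", validate_core_list("[0-3,8]"), validate_core_list("[10--11]"));
    return 0;
}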
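The _dif_verify errors that dominate the dif suite, here and below, come from tests that generate protection information, corrupt one field, and assert that verification fails. Each message names one field of the 8-byte T10 DIF tuple stored with every block: a 2-byte Guard (a CRC over the block data), a 2-byte Application Tag, and a 4-byte Reference Tag that is normally tied to the LBA. A simplified sketch of the tuple and the three-way compare, illustrative only since the real logic lives in lib/util/dif.c:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* The 8-byte T10 DIF tuple carried in each block's metadata. */
struct t10_dif {
    uint16_t guard;   /* CRC16 computed over the block's data */
    uint16_t app_tag; /* opaque, application-defined */
    uint32_t ref_tag; /* typically derived from the block's LBA */
};

/* Compare stored vs. expected tuple, reporting in the same order as the
 * dif.c messages in this log: Guard, then App Tag, then Ref Tag. */
static int dif_verify_one(const struct t10_dif *got, const struct t10_dif *want,
                          uint64_t lba)
{
    if (got->guard != want->guard) {
        fprintf(stderr, "Failed to compare Guard: LBA=%" PRIx64 ", Expected=%x, Actual=%x\n",
                lba, want->guard, got->guard);
        return -1;
    }
    if (got->app_tag != want->app_tag) {
        fprintf(stderr, "Failed to compare App Tag: LBA=%" PRIx64 ", Expected=%x, Actual=%x\n",
                lba, want->app_tag, got->app_tag);
        return -1;
    }
    if (got->ref_tag != want->ref_tag) {
        fprintf(stderr, "Failed to compare Ref Tag: LBA=%" PRIx64 ", Expected=%" PRIx32 ", Actual=%" PRIx32 "\n",
                lba, want->ref_tag, got->ref_tag);
        return -1;
    }
    return 0;
}

int main(void)
{
    struct t10_dif want = { .guard = 0xfd4c, .app_tag = 0x88, .ref_tag = 0x58 };
    struct t10_dif got = want;

    got.ref_tag = 0x18; /* simulate the corruption the tests inject */
    return dif_verify_one(&got, &want, 0x88) ? 1 : 0;
}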
00:07:18.129 passed 00:07:18.129 Test: dif_sec_4100_md_128_error_test ...passed 00:07:18.129 Test: dif_guard_seed_test ...[2024-11-05 16:46:06.756647] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:07:18.129 [2024-11-05 16:46:06.756689] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:07:18.129 passed 00:07:18.129 Test: dif_guard_value_test ...passed 00:07:18.129 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:07:18.129 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:07:18.129 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:18.129 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:18.129 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:18.129 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:07:18.129 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:18.129 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:18.129 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:07:18.129 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:18.129 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:07:18.129 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:07:18.129 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:18.129 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:18.129 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:18.129 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:18.129 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:18.129 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:18.129 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-05 16:46:06.801008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd0c, Actual=fd4c 00:07:18.129 [2024-11-05 16:46:06.803487] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fe61, Actual=fe21 00:07:18.129 [2024-11-05 16:46:06.805938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=c8 00:07:18.129 [2024-11-05 16:46:06.808405] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=c8 00:07:18.129 [2024-11-05 16:46:06.810887] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=1c 00:07:18.129 [2024-11-05 16:46:06.813332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=1c 00:07:18.129 [2024-11-05 16:46:06.815800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd4c, Actual=c83b 00:07:18.129 [2024-11-05 16:46:06.817465] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fe21, Actual=a3cb 00:07:18.129 [2024-11-05 16:46:06.819144] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: 
Failed to compare Guard: LBA=92, Expected=1ab753ad, Actual=1ab753ed 00:07:18.129 [2024-11-05 16:46:06.821595] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=38574620, Actual=38574660 00:07:18.129 [2024-11-05 16:46:06.824084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=c8 00:07:18.129 [2024-11-05 16:46:06.826536] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=c8 00:07:18.129 [2024-11-05 16:46:06.829001] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=400000005c 00:07:18.129 [2024-11-05 16:46:06.831457] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=400000005c 00:07:18.129 [2024-11-05 16:46:06.833915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1ab753ed, Actual=ea27deea 00:07:18.129 [2024-11-05 16:46:06.835597] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=38574660, Actual=95c74996 00:07:18.129 [2024-11-05 16:46:06.837289] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7328ecc20d3, Actual=a576a7728ecc20d3 00:07:18.129 [2024-11-05 16:46:06.839748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=88010a6d4837a266, Actual=88010a2d4837a266 00:07:18.129 [2024-11-05 16:46:06.842193] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=c8 00:07:18.129 [2024-11-05 16:46:06.844670] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=c8 00:07:18.129 [2024-11-05 16:46:06.847126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=40005c 00:07:18.129 [2024-11-05 16:46:06.849579] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=40005c 00:07:18.129 [2024-11-05 16:46:06.852071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728ecc20d3, Actual=ca744bbfc4c5bed4 00:07:18.129 [2024-11-05 16:46:06.853750] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=88010a2d4837a266, Actual=8dd7fce6c4f1989d 00:07:18.129 passed 00:07:18.129 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-11-05 16:46:06.854525] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:07:18.129 [2024-11-05 16:46:06.854837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:07:18.129 [2024-11-05 16:46:06.855146] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.129 [2024-11-05 16:46:06.855458] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, 
Expected=88, Actual=c8 00:07:18.129 [2024-11-05 16:46:06.855797] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:07:18.129 [2024-11-05 16:46:06.856102] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:07:18.129 [2024-11-05 16:46:06.856409] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=c83b 00:07:18.129 [2024-11-05 16:46:06.856603] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a3cb 00:07:18.129 [2024-11-05 16:46:06.856802] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ad, Actual=1ab753ed 00:07:18.129 [2024-11-05 16:46:06.857096] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574620, Actual=38574660 00:07:18.129 [2024-11-05 16:46:06.857422] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.129 [2024-11-05 16:46:06.857728] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.129 [2024-11-05 16:46:06.858037] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:18.129 [2024-11-05 16:46:06.858338] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:18.129 [2024-11-05 16:46:06.858635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=ea27deea 00:07:18.129 [2024-11-05 16:46:06.858827] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=95c74996 00:07:18.129 [2024-11-05 16:46:06.859045] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7328ecc20d3, Actual=a576a7728ecc20d3 00:07:18.129 [2024-11-05 16:46:06.859346] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a6d4837a266, Actual=88010a2d4837a266 00:07:18.129 [2024-11-05 16:46:06.859672] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.129 [2024-11-05 16:46:06.859973] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.129 [2024-11-05 16:46:06.860275] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:18.129 [2024-11-05 16:46:06.860577] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:18.129 [2024-11-05 16:46:06.860900] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ca744bbfc4c5bed4 00:07:18.129 [2024-11-05 16:46:06.861106] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare 
Guard: LBA=88, Expected=88010a2d4837a266, Actual=8dd7fce6c4f1989d 00:07:18.129 passed 00:07:18.129 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-11-05 16:46:06.861347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:07:18.129 [2024-11-05 16:46:06.861660] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:07:18.129 [2024-11-05 16:46:06.861968] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.129 [2024-11-05 16:46:06.862271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.129 [2024-11-05 16:46:06.862589] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:07:18.130 [2024-11-05 16:46:06.862904] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:07:18.130 [2024-11-05 16:46:06.863224] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=c83b 00:07:18.130 [2024-11-05 16:46:06.863429] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a3cb 00:07:18.130 [2024-11-05 16:46:06.863637] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ad, Actual=1ab753ed 00:07:18.130 [2024-11-05 16:46:06.863948] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574620, Actual=38574660 00:07:18.130 [2024-11-05 16:46:06.864252] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.130 [2024-11-05 16:46:06.864561] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.130 [2024-11-05 16:46:06.864877] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:18.130 [2024-11-05 16:46:06.865181] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:18.130 [2024-11-05 16:46:06.865491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=ea27deea 00:07:18.130 [2024-11-05 16:46:06.865683] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=95c74996 00:07:18.130 [2024-11-05 16:46:06.865901] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7328ecc20d3, Actual=a576a7728ecc20d3 00:07:18.130 [2024-11-05 16:46:06.866199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a6d4837a266, Actual=88010a2d4837a266 00:07:18.130 [2024-11-05 16:46:06.866529] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.130 [2024-11-05 
16:46:06.866835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.130 [2024-11-05 16:46:06.867159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:18.130 [2024-11-05 16:46:06.867473] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:18.130 [2024-11-05 16:46:06.867800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ca744bbfc4c5bed4 00:07:18.130 [2024-11-05 16:46:06.868010] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=8dd7fce6c4f1989d 00:07:18.130 passed 00:07:18.130 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-11-05 16:46:06.868243] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:07:18.130 [2024-11-05 16:46:06.868561] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:07:18.130 [2024-11-05 16:46:06.868876] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.130 [2024-11-05 16:46:06.869180] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.130 [2024-11-05 16:46:06.869511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:07:18.130 [2024-11-05 16:46:06.869810] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:07:18.130 [2024-11-05 16:46:06.870116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=c83b 00:07:18.130 [2024-11-05 16:46:06.870319] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a3cb 00:07:18.130 [2024-11-05 16:46:06.870515] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ad, Actual=1ab753ed 00:07:18.130 [2024-11-05 16:46:06.870812] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574620, Actual=38574660 00:07:18.130 [2024-11-05 16:46:06.871155] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.130 [2024-11-05 16:46:06.871476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.130 [2024-11-05 16:46:06.871779] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:18.130 [2024-11-05 16:46:06.872109] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:18.130 [2024-11-05 16:46:06.872422] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare 
Guard: LBA=88, Expected=1ab753ed, Actual=ea27deea 00:07:18.130 [2024-11-05 16:46:06.872623] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=95c74996 00:07:18.130 [2024-11-05 16:46:06.872835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7328ecc20d3, Actual=a576a7728ecc20d3 00:07:18.130 [2024-11-05 16:46:06.873144] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a6d4837a266, Actual=88010a2d4837a266 00:07:18.130 [2024-11-05 16:46:06.873447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.130 [2024-11-05 16:46:06.873758] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.130 [2024-11-05 16:46:06.874067] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:18.130 [2024-11-05 16:46:06.874370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:18.130 [2024-11-05 16:46:06.874693] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ca744bbfc4c5bed4 00:07:18.130 [2024-11-05 16:46:06.874901] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=8dd7fce6c4f1989d 00:07:18.130 passed 00:07:18.130 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-11-05 16:46:06.875139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:07:18.130 [2024-11-05 16:46:06.875444] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:07:18.130 [2024-11-05 16:46:06.875770] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.130 [2024-11-05 16:46:06.876087] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.130 [2024-11-05 16:46:06.876407] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:07:18.130 [2024-11-05 16:46:06.876711] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:07:18.130 [2024-11-05 16:46:06.877022] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=c83b 00:07:18.130 [2024-11-05 16:46:06.877215] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a3cb 00:07:18.130 passed 00:07:18.130 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-11-05 16:46:06.877440] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ad, Actual=1ab753ed 00:07:18.130 [2024-11-05 16:46:06.877744] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574620, Actual=38574660 00:07:18.130 [2024-11-05 16:46:06.878068] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.130 [2024-11-05 16:46:06.878366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.130 [2024-11-05 16:46:06.878681] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:18.130 [2024-11-05 16:46:06.878998] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:18.130 [2024-11-05 16:46:06.879303] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=ea27deea 00:07:18.130 [2024-11-05 16:46:06.879503] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=95c74996 00:07:18.130 [2024-11-05 16:46:06.879753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7328ecc20d3, Actual=a576a7728ecc20d3 00:07:18.130 [2024-11-05 16:46:06.880065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a6d4837a266, Actual=88010a2d4837a266 00:07:18.130 [2024-11-05 16:46:06.880368] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.130 [2024-11-05 16:46:06.880676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.131 [2024-11-05 16:46:06.880979] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:18.131 [2024-11-05 16:46:06.881284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:18.131 [2024-11-05 16:46:06.881600] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ca744bbfc4c5bed4 00:07:18.131 [2024-11-05 16:46:06.881801] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=8dd7fce6c4f1989d 00:07:18.131 passed 00:07:18.131 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-11-05 16:46:06.882034] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:07:18.131 [2024-11-05 16:46:06.882345] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:07:18.131 [2024-11-05 16:46:06.882647] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.131 [2024-11-05 16:46:06.882973] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.131 [2024-11-05 16:46:06.883306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare 
Ref Tag: LBA=88, Expected=58, Actual=18 00:07:18.131 [2024-11-05 16:46:06.883618] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:07:18.131 [2024-11-05 16:46:06.883930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=c83b 00:07:18.131 [2024-11-05 16:46:06.884126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a3cb 00:07:18.131 passed 00:07:18.131 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-11-05 16:46:06.884360] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ad, Actual=1ab753ed 00:07:18.131 [2024-11-05 16:46:06.884666] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574620, Actual=38574660 00:07:18.131 [2024-11-05 16:46:06.884997] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.131 [2024-11-05 16:46:06.885308] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.131 [2024-11-05 16:46:06.885619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:18.131 [2024-11-05 16:46:06.885926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:18.131 [2024-11-05 16:46:06.886228] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=ea27deea 00:07:18.131 [2024-11-05 16:46:06.886414] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=95c74996 00:07:18.131 [2024-11-05 16:46:06.886651] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7328ecc20d3, Actual=a576a7728ecc20d3 00:07:18.131 [2024-11-05 16:46:06.886965] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a6d4837a266, Actual=88010a2d4837a266 00:07:18.131 [2024-11-05 16:46:06.887265] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.131 [2024-11-05 16:46:06.887577] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.131 [2024-11-05 16:46:06.887881] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:18.131 [2024-11-05 16:46:06.888175] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:18.131 [2024-11-05 16:46:06.888494] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ca744bbfc4c5bed4 00:07:18.131 [2024-11-05 16:46:06.888694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=8dd7fce6c4f1989d 
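The split_apptag and split_reftag cases above, like the earlier dif_apptag_mask_test (Expected=1256, Actual=1234), exercise masked App Tag checks: only the bits selected by an apptag mask have to match. The comparison reduces to a single masked equality test, roughly:

#include <stdint.h>
#include <stdio.h>

/* Masked App Tag check: bits outside the mask are ignored. The mask the
 * test uses is not printed in the log, so the values below are
 * illustrative choices only. */
static int app_tag_matches(uint16_t actual, uint16_t expected, uint16_t mask)
{
    return (actual & mask) == (expected & mask);
}

int main(void)
{
    printf("%d\n", app_tag_matches(0x1234, 0x1256, 0x00ff)); /* 0: 0x34 vs 0x56 differ */
    printf("%d\n", app_tag_matches(0x1234, 0x1256, 0xff00)); /* 1: high byte 0x12 matches */
    return 0;
}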
00:07:18.131 passed 00:07:18.131 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:07:18.131 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:18.131 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:18.131 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:18.131 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:18.131 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:18.131 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:18.131 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:18.131 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:18.131 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-05 16:46:06.932622] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd0c, Actual=fd4c 00:07:18.131 [2024-11-05 16:46:06.933745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=ff1b, Actual=ff5b 00:07:18.131 [2024-11-05 16:46:06.934872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=c8 00:07:18.131 [2024-11-05 16:46:06.935990] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=c8 00:07:18.131 [2024-11-05 16:46:06.937111] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=1c 00:07:18.131 [2024-11-05 16:46:06.938202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=1c 00:07:18.131 [2024-11-05 16:46:06.939318] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd4c, Actual=c83b 00:07:18.131 [2024-11-05 16:46:06.940432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=4e97, Actual=137d 00:07:18.131 [2024-11-05 16:46:06.941549] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1ab753ad, Actual=1ab753ed 00:07:18.131 [2024-11-05 16:46:06.942662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=4fd2f0d6, Actual=4fd2f096 00:07:18.131 [2024-11-05 16:46:06.943820] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=c8 00:07:18.131 [2024-11-05 16:46:06.944961] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=c8 00:07:18.131 [2024-11-05 16:46:06.946070] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=400000005c 00:07:18.131 [2024-11-05 16:46:06.947202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=400000005c 00:07:18.131 [2024-11-05 16:46:06.948335] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1ab753ed, Actual=ea27deea 00:07:18.131 [2024-11-05 16:46:06.949447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed 
to compare Guard: LBA=92, Expected=2b267559, Actual=86b67aaf 00:07:18.131 [2024-11-05 16:46:06.950559] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7328ecc20d3, Actual=a576a7728ecc20d3 00:07:18.131 [2024-11-05 16:46:06.951720] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=31175da2dfcf0789, Actual=31175de2dfcf0789 00:07:18.131 [2024-11-05 16:46:06.952834] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=c8 00:07:18.131 [2024-11-05 16:46:06.953962] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=c8 00:07:18.131 [2024-11-05 16:46:06.955084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=40005c 00:07:18.131 [2024-11-05 16:46:06.956210] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=40005c 00:07:18.131 [2024-11-05 16:46:06.957312] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728ecc20d3, Actual=ca744bbfc4c5bed4 00:07:18.131 passed 00:07:18.131 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-11-05 16:46:06.958447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=cd8425887035b4fd, Actual=c852d343fcf38e06 00:07:18.131 [2024-11-05 16:46:06.958783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:07:18.131 [2024-11-05 16:46:06.959071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=9c9a, Actual=9cda 00:07:18.131 [2024-11-05 16:46:06.959342] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.131 [2024-11-05 16:46:06.959640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.131 [2024-11-05 16:46:06.959939] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:07:18.131 [2024-11-05 16:46:06.960237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:07:18.131 [2024-11-05 16:46:06.960506] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=c83b 00:07:18.131 [2024-11-05 16:46:06.960774] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=70fc 00:07:18.131 [2024-11-05 16:46:06.961037] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ad, Actual=1ab753ed 00:07:18.131 [2024-11-05 16:46:06.961320] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8e52c523, Actual=8e52c563 00:07:18.131 [2024-11-05 16:46:06.961600] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.131 
[2024-11-05 16:46:06.961879] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.131 [2024-11-05 16:46:06.962151] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:18.131 [2024-11-05 16:46:06.962423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:18.131 [2024-11-05 16:46:06.962687] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=ea27deea 00:07:18.131 [2024-11-05 16:46:06.962972] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=47364f5a 00:07:18.131 [2024-11-05 16:46:06.963266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7328ecc20d3, Actual=a576a7728ecc20d3 00:07:18.132 [2024-11-05 16:46:06.963544] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c4f55231f92083d6, Actual=c4f55271f92083d6 00:07:18.132 [2024-11-05 16:46:06.963827] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.132 [2024-11-05 16:46:06.964098] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:18.132 [2024-11-05 16:46:06.964366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:18.132 [2024-11-05 16:46:06.964638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:18.132 [2024-11-05 16:46:06.964928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ca744bbfc4c5bed4 00:07:18.132 [2024-11-05 16:46:06.965203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=3db0dcd0da1c0a59 00:07:18.132 passed 00:07:18.132 Test: dix_sec_512_md_0_error ...passed 00:07:18.132 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-11-05 16:46:06.965270] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
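dix_sec_512_md_0_error above, like the dif_sec_*_md_0_error cases earlier, asks spdk_dif_ctx_init for a 512-byte block with zero bytes of metadata and expects a refusal: the 8-byte DIF tuple has nowhere to live. A hedged sketch of that entry guard, as a hypothetical simplification of the check reported at dif.c:479:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define DIF_SIZE 8u /* guard(2) + app tag(2) + ref tag(4) */

/* A DIF context is only viable if each block's metadata region can hold
 * the 8-byte tuple; md_size == 0 must fail, as the tests above expect. */
static int dif_ctx_check_md_size(uint32_t md_size)
{
    if (md_size < DIF_SIZE) {
        fprintf(stderr, "Metadata size is smaller than DIF size.\n");
        return -EINVAL;
    }
    return 0;
}

int main(void)
{
    dif_ctx_check_md_size(0);                     /* the 512+0 layout: rejected */
    return dif_ctx_check_md_size(8) == 0 ? 0 : 1; /* a 512+8 layout: accepted */
}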
00:07:18.132 passed 00:07:18.132 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:18.132 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:18.132 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:18.132 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:18.132 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:18.132 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:18.132 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:18.132 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:18.132 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-05 16:46:07.008967] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f54c, Actual=fd4c 00:07:18.132 [2024-11-05 16:46:07.010134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bad0, Actual=b2d0 00:07:18.132 [2024-11-05 16:46:07.011291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:07:18.391 [2024-11-05 16:46:07.012425] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:07:18.391 [2024-11-05 16:46:07.013624] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=800005d 00:07:18.391 [2024-11-05 16:46:07.014775] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=800005d 00:07:18.391 [2024-11-05 16:46:07.015925] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=9084 00:07:18.391 [2024-11-05 16:46:07.017100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=8289 00:07:18.391 [2024-11-05 16:46:07.018226] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=12b753ed, Actual=1ab753ed 00:07:18.391 [2024-11-05 16:46:07.019362] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=119f2e9, Actual=919f2e9 00:07:18.391 [2024-11-05 16:46:07.020549] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:07:18.391 [2024-11-05 16:46:07.021687] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:07:18.391 [2024-11-05 16:46:07.022843] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=80000000000005d 00:07:18.391 [2024-11-05 16:46:07.024019] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=80000000000005d 00:07:18.391 [2024-11-05 16:46:07.025149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=bc825051 00:07:18.391 [2024-11-05 16:46:07.026314] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5a3d6598, Actual=7cb7622a 00:07:18.391 [2024-11-05 
16:46:07.027501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=ad76a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:18.391 [2024-11-05 16:46:07.028611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=8b02d0d9d06b56a2, Actual=8302d0d9d06b56a2 00:07:18.391 [2024-11-05 16:46:07.029778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:07:18.391 [2024-11-05 16:46:07.030957] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:07:18.391 [2024-11-05 16:46:07.032098] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=85d 00:07:18.391 [2024-11-05 16:46:07.033220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=85d 00:07:18.391 [2024-11-05 16:46:07.034348] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=4d6cd90282c85a2b 00:07:18.391 passed 00:07:18.391 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-11-05 16:46:07.035463] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=f8c7d62bba07610d 00:07:18.391 [2024-11-05 16:46:07.035841] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=f54c, Actual=fd4c 00:07:18.391 [2024-11-05 16:46:07.036111] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=d951, Actual=d151 00:07:18.391 [2024-11-05 16:46:07.036389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:07:18.391 [2024-11-05 16:46:07.036671] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:07:18.391 [2024-11-05 16:46:07.036967] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000059 00:07:18.391 [2024-11-05 16:46:07.037247] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000059 00:07:18.391 [2024-11-05 16:46:07.037525] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=9084 00:07:18.391 [2024-11-05 16:46:07.037805] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=e108 00:07:18.391 [2024-11-05 16:46:07.038077] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=12b753ed, Actual=1ab753ed 00:07:18.391 [2024-11-05 16:46:07.038352] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=c099c71c, Actual=c899c71c 00:07:18.391 [2024-11-05 16:46:07.038639] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:07:18.391 [2024-11-05 16:46:07.038933] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:07:18.391 [2024-11-05 16:46:07.039198] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800000000000059 00:07:18.391 [2024-11-05 16:46:07.039487] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800000000000059 00:07:18.391 [2024-11-05 16:46:07.039756] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=bc825051 00:07:18.391 [2024-11-05 16:46:07.040030] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=bd3757df 00:07:18.391 [2024-11-05 16:46:07.040317] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=ad76a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:18.391 [2024-11-05 16:46:07.040598] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7ee0df4af684d2fd, Actual=76e0df4af684d2fd 00:07:18.391 [2024-11-05 16:46:07.040867] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:07:18.391 [2024-11-05 16:46:07.041156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:07:18.392 [2024-11-05 16:46:07.041424] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=859 00:07:18.392 [2024-11-05 16:46:07.041698] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=859 00:07:18.392 [2024-11-05 16:46:07.041978] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=4d6cd90282c85a2b 00:07:18.392 [2024-11-05 16:46:07.042258] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=d25d9b89ce8e552 00:07:18.392 passed 00:07:18.392 Test: set_md_interleave_iovs_test ...passed 00:07:18.392 Test: set_md_interleave_iovs_split_test ...passed 00:07:18.392 Test: dif_generate_stream_pi_16_test ...passed 00:07:18.392 Test: dif_generate_stream_test ...passed 00:07:18.392 Test: set_md_interleave_iovs_alignment_test ...[2024-11-05 16:46:07.049694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
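The dif.c:1799 error just above is set_md_interleave_iovs_alignment_test handing spdk_dif_set_md_interleave_iovs a buffer too small for the data-plus-metadata interleaved layout it describes, and the call bailing out instead of overflowing. The size check involved is essentially the following, a rough sketch under assumed names rather than SPDK's code:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

/* In an interleaved layout every block carries data_block_size bytes of
 * data followed by md_size bytes of metadata, so mapping num_blocks of
 * caller data needs num_blocks * (data_block_size + md_size) bytes. */
static int check_interleave_fit(uint64_t buf_len, uint32_t data_block_size,
                                uint32_t md_size, uint64_t num_blocks)
{
    uint64_t need = num_blocks * (uint64_t)(data_block_size + md_size);

    if (buf_len < need) {
        fprintf(stderr, "Buffer overflow will occur.\n");
        return -ERANGE;
    }
    return 0;
}

int main(void)
{
    check_interleave_fit(2048, 512, 8, 4);        /* 4 blocks need 2080 bytes: rejected */
    return check_interleave_fit(2080, 512, 8, 4); /* exact fit: accepted */
}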
00:07:18.392 passed
00:07:18.392 Test: dif_generate_split_test ...passed
00:07:18.392 Test: set_md_interleave_iovs_multi_segments_test ...passed
00:07:18.392 Test: dif_verify_split_test ...passed
00:07:18.392 Test: dif_verify_stream_multi_segments_test ...passed
00:07:18.392 Test: update_crc32c_pi_16_test ...passed
00:07:18.392 Test: update_crc32c_test ...passed
00:07:18.392 Test: dif_update_crc32c_split_test ...passed
00:07:18.392 Test: dif_update_crc32c_stream_multi_segments_test ...passed
00:07:18.392 Test: get_range_with_md_test ...passed
00:07:18.392 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed
00:07:18.392 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed
00:07:18.392 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed
00:07:18.392 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed
00:07:18.392 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed
00:07:18.392 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed
00:07:18.392 Test: dif_generate_and_verify_unmap_test ...passed
00:07:18.392
00:07:18.392 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:18.392               suites      1      1    n/a      0        0
00:07:18.392                tests     79     79     79      0        0
00:07:18.392              asserts   3584   3584   3584      0      n/a
00:07:18.392
00:07:18.392 Elapsed time = 0.348 seconds
00:07:18.392 16:46:07 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut
00:07:18.392
00:07:18.392
00:07:18.392 CUnit - A unit testing framework for C - Version 2.1-3
00:07:18.392 http://cunit.sourceforge.net/
00:07:18.392
00:07:18.392
00:07:18.392 Suite: iov
00:07:18.392 Test: test_single_iov ...passed
00:07:18.392 Test: test_simple_iov ...passed
00:07:18.392 Test: test_complex_iov ...passed
00:07:18.392 Test: test_iovs_to_buf ...passed
00:07:18.392 Test: test_buf_to_iovs ...passed
00:07:18.392 Test: test_memset ...passed
00:07:18.392 Test: test_iov_one ...passed
00:07:18.392 Test: test_iov_xfer ...passed
00:07:18.392
00:07:18.392 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:18.392               suites      1      1    n/a      0        0
00:07:18.392                tests      8      8      8      0        0
00:07:18.392              asserts    156    156    156      0      n/a
00:07:18.392
00:07:18.392 Elapsed time = 0.000 seconds
00:07:18.392 16:46:07 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut
00:07:18.392
00:07:18.392
00:07:18.392 CUnit - A unit testing framework for C - Version 2.1-3
00:07:18.392 http://cunit.sourceforge.net/
00:07:18.392
00:07:18.392
00:07:18.392 Suite: math
00:07:18.392 Test: test_serial_number_arithmetic ...passed
00:07:18.392 Suite: erase
00:07:18.392 Test: test_memset_s ...passed
00:07:18.392
00:07:18.392 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:18.392               suites      2      2    n/a      0        0
00:07:18.392                tests      2      2      2      0        0
00:07:18.392              asserts     18     18     18      0      n/a
00:07:18.392
00:07:18.392 Elapsed time = 0.000 seconds
00:07:18.392 16:46:07 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut
00:07:18.392
00:07:18.392
00:07:18.392 CUnit - A unit testing framework for C - Version 2.1-3
00:07:18.392 http://cunit.sourceforge.net/
00:07:18.392
00:07:18.392
00:07:18.392 Suite: pipe
00:07:18.392 Test: test_create_destroy ...passed
00:07:18.392 Test: test_write_get_buffer ...passed
00:07:18.392 Test: test_write_advance ...passed
00:07:18.392 Test: test_read_get_buffer ...passed
00:07:18.392 Test: test_read_advance ...passed
00:07:18.392 Test: test_data ...passed
00:07:18.392
00:07:18.392 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:18.392               suites      1      1    n/a      0        0
00:07:18.392                tests      6      6      6      0        0
00:07:18.392              asserts    250    250    250      0      n/a
00:07:18.392
00:07:18.392 Elapsed time = 0.000 seconds
00:07:18.392 16:46:07 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut
00:07:18.392
00:07:18.392
00:07:18.392 CUnit - A unit testing framework for C - Version 2.1-3
00:07:18.392 http://cunit.sourceforge.net/
00:07:18.392
00:07:18.392
00:07:18.392 Suite: xor
00:07:18.392 Test: test_xor_gen ...passed
00:07:18.392
00:07:18.392 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:18.392               suites      1      1    n/a      0        0
00:07:18.392                tests      1      1      1      0        0
00:07:18.392              asserts     17     17     17      0      n/a
00:07:18.392
00:07:18.392 Elapsed time = 0.007 seconds
00:07:18.392
00:07:18.392 real 0m0.717s
00:07:18.392 user 0m0.515s
00:07:18.392 sys 0m0.204s
00:07:18.392 16:46:07 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:18.392 16:46:07 -- common/autotest_common.sh@10 -- # set +x
00:07:18.392 ************************************
00:07:18.392 END TEST unittest_util
00:07:18.392 ************************************
00:07:18.392 16:46:07 -- unit/unittest.sh@258 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:07:18.392 16:46:07 -- unit/unittest.sh@259 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut
00:07:18.392 16:46:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:18.392 16:46:07 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:18.392 16:46:07 -- common/autotest_common.sh@10 -- # set +x
00:07:18.651 ************************************
00:07:18.651 START TEST unittest_vhost
00:07:18.651 ************************************
00:07:18.651 16:46:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut
00:07:18.651
00:07:18.651
00:07:18.651 CUnit - A unit testing framework for C - Version 2.1-3
00:07:18.651 http://cunit.sourceforge.net/
00:07:18.651
00:07:18.651
00:07:18.651 Suite: vhost_suite
00:07:18.651 Test: desc_to_iov_test ...[2024-11-05 16:46:07.301140] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached
00:07:18.651 passed
00:07:18.651 Test: create_controller_test ...[2024-11-05 16:46:07.305650] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f)
00:07:18.651 [2024-11-05 16:46:07.305903] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf)
00:07:18.651 [2024-11-05 16:46:07.306137] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f)
00:07:18.651 [2024-11-05 16:46:07.306347] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf)
00:07:18.651 [2024-11-05 16:46:07.306530] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name
00:07:18.651 [2024-11-05 16:46:07.306763] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-11-05 16:46:07.307853] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:07:18.651 passed 00:07:18.651 Test: session_find_by_vid_test ...passed 00:07:18.651 Test: remove_controller_test ...[2024-11-05 16:46:07.310312] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:07:18.651 passed 00:07:18.651 Test: vq_avail_ring_get_test ...passed 00:07:18.651 Test: vq_packed_ring_test ...passed 00:07:18.651 Test: vhost_blk_construct_test ...passed 00:07:18.652 00:07:18.652 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.652 suites 1 1 n/a 0 0 00:07:18.652 tests 7 7 7 0 0 00:07:18.652 asserts 145 145 145 0 n/a 00:07:18.652 00:07:18.652 Elapsed time = 0.012 seconds 00:07:18.652 00:07:18.652 real 0m0.050s 00:07:18.652 user 0m0.038s 00:07:18.652 sys 0m0.010s 00:07:18.652 16:46:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.652 16:46:07 -- common/autotest_common.sh@10 -- # set +x 00:07:18.652 ************************************ 00:07:18.652 END TEST unittest_vhost 00:07:18.652 ************************************ 00:07:18.652 16:46:07 -- unit/unittest.sh@261 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:18.652 16:46:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:18.652 16:46:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.652 16:46:07 -- common/autotest_common.sh@10 -- # set +x 00:07:18.652 ************************************ 00:07:18.652 START TEST unittest_dma 00:07:18.652 ************************************ 00:07:18.652 16:46:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:18.652 00:07:18.652 00:07:18.652 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.652 http://cunit.sourceforge.net/ 00:07:18.652 00:07:18.652 00:07:18.652 Suite: dma_suite 00:07:18.652 Test: test_dma ...[2024-11-05 16:46:07.396956] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:07:18.652 passed 00:07:18.652 00:07:18.652 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.652 suites 1 1 n/a 0 0 00:07:18.652 tests 1 1 1 0 0 00:07:18.652 asserts 50 50 50 0 n/a 00:07:18.652 00:07:18.652 Elapsed time = 0.001 seconds 00:07:18.652 00:07:18.652 real 0m0.024s 00:07:18.652 user 0m0.017s 00:07:18.652 sys 0m0.008s 00:07:18.652 16:46:07 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.652 16:46:07 -- common/autotest_common.sh@10 -- # set +x 00:07:18.652 ************************************ 00:07:18.652 END TEST unittest_dma 00:07:18.652 ************************************ 00:07:18.652 16:46:07 -- unit/unittest.sh@263 -- # run_test unittest_init unittest_init 00:07:18.652 16:46:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:18.652 16:46:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.652 16:46:07 -- common/autotest_common.sh@10 -- # set +x 00:07:18.652 ************************************ 00:07:18.652 START TEST unittest_init 00:07:18.652 ************************************ 00:07:18.652 16:46:07 -- common/autotest_common.sh@1114 -- # unittest_init 00:07:18.652 16:46:07 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:07:18.652 00:07:18.652 00:07:18.652 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.652 http://cunit.sourceforge.net/ 00:07:18.652 00:07:18.652 00:07:18.652 Suite: subsystem_suite 00:07:18.652 Test: subsystem_sort_test_depends_on_single ...passed 00:07:18.652 Test: subsystem_sort_test_depends_on_multiple ...passed 00:07:18.652 Test: subsystem_sort_test_missing_dependency ...[2024-11-05 16:46:07.475454] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:07:18.652 passed 00:07:18.652 00:07:18.652 [2024-11-05 16:46:07.475751] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:07:18.652 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.652 suites 1 1 n/a 0 0 00:07:18.652 tests 3 3 3 0 0 00:07:18.652 asserts 20 20 20 0 n/a 00:07:18.652 00:07:18.652 Elapsed time = 0.000 seconds 00:07:18.652 00:07:18.652 real 0m0.032s 00:07:18.652 user 0m0.017s 00:07:18.652 sys 0m0.016s 00:07:18.652 16:46:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.652 16:46:07 -- common/autotest_common.sh@10 -- # set +x 00:07:18.652 ************************************ 00:07:18.652 END TEST unittest_init 00:07:18.652 ************************************ 00:07:18.652 16:46:07 -- unit/unittest.sh@265 -- # [[ y == y ]] 00:07:18.652 16:46:07 -- unit/unittest.sh@266 -- # hostname 00:07:18.652 16:46:07 -- unit/unittest.sh@266 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -d . -c --no-external -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:07:18.910 geninfo: WARNING: invalid characters removed from testname! 
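Each *_ut section above is a standalone CUnit binary; unittest.sh invokes it by absolute path and trusts the exit status, while the "Run Summary" tables are printed by CUnit itself. A minimal sketch of running one of those binaries by hand, using the iov_ut path from this run (any of the *_ut binaries above behaves the same way):

    #!/bin/bash
    # Run a single CUnit unit-test binary directly. CUnit prints the
    # "Run Summary: Type Total Ran Passed Failed Inactive" table seen above,
    # and the process exits nonzero if any test or assertion fails.
    ut=/home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut
    if "$ut"; then
        echo "iov_ut: all tests passed"
    else
        rc=$?
        echo "iov_ut: CUnit reported failures (exit $rc)" >&2
        exit "$rc"
    fi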
00:07:45.451 16:46:30 -- unit/unittest.sh@267 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:07:45.451 16:46:34 -- unit/unittest.sh@268 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:48.015 16:46:36 -- unit/unittest.sh@269 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:51.301 16:46:39 -- unit/unittest.sh@270 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:53.204 16:46:42 -- unit/unittest.sh@271 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:55.738 16:46:44 -- unit/unittest.sh@272 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:58.270 16:46:46 -- unit/unittest.sh@273 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:07:58.270 16:46:46 -- unit/unittest.sh@274 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:07:58.529 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:58.529 Found 309 entries. 00:07:58.529 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:07:58.529 Writing .css and .png files. 00:07:58.529 Generating output. 
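The lcov invocations above form a capture/merge/filter pipeline, and the "Processing file" lines below are genhtml rendering one page per source file from the filtered tracefile. Condensed into one script, assuming lcov and genhtml are on PATH and with the secondary --rc genhtml options trimmed for brevity, the flow is:

    #!/bin/bash
    # Coverage post-processing, mirroring unittest.sh steps @267..@274 above.
    OUT=/home/vagrant/spdk_repo/spdk/../output/ut_coverage
    # Command string is intentionally word-split when expanded below.
    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"

    # Merge the pre-test baseline with the post-test capture.
    $LCOV -a "$OUT/ut_cov_base.info" -a "$OUT/ut_cov_test.info" -o "$OUT/ut_cov_total.info"
    $LCOV -a "$OUT/ut_cov_total.info" -o "$OUT/ut_cov_unit.info"
    # Keep only first-party code: drop app/, dpdk/, examples/ and test/ hits.
    for d in app dpdk examples test; do
        $LCOV -r "$OUT/ut_cov_unit.info" "/home/vagrant/spdk_repo/spdk/$d/*" -o "$OUT/ut_cov_unit.info"
    done
    rm -f "$OUT/ut_cov_base.info" "$OUT/ut_cov_test.info"
    # Render the HTML report; this step emits the "Processing file ..." lines.
    genhtml "$OUT/ut_cov_unit.info" --output-directory "$OUT"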
00:07:58.529 Processing file include/linux/virtio_ring.h 00:07:58.842 Processing file include/spdk/nvme_spec.h 00:07:58.842 Processing file include/spdk/bdev_module.h 00:07:58.842 Processing file include/spdk/util.h 00:07:58.842 Processing file include/spdk/base64.h 00:07:58.842 Processing file include/spdk/thread.h 00:07:58.842 Processing file include/spdk/mmio.h 00:07:58.842 Processing file include/spdk/histogram_data.h 00:07:58.842 Processing file include/spdk/nvme.h 00:07:58.842 Processing file include/spdk/endian.h 00:07:58.842 Processing file include/spdk/trace.h 00:07:58.842 Processing file include/spdk/nvmf_transport.h 00:07:58.842 Processing file include/spdk_internal/virtio.h 00:07:58.842 Processing file include/spdk_internal/nvme_tcp.h 00:07:58.842 Processing file include/spdk_internal/sock.h 00:07:58.842 Processing file include/spdk_internal/utf.h 00:07:58.842 Processing file include/spdk_internal/sgl.h 00:07:58.842 Processing file include/spdk_internal/rdma.h 00:07:59.134 Processing file lib/accel/accel_rpc.c 00:07:59.134 Processing file lib/accel/accel_sw.c 00:07:59.134 Processing file lib/accel/accel.c 00:07:59.134 Processing file lib/bdev/scsi_nvme.c 00:07:59.134 Processing file lib/bdev/bdev.c 00:07:59.134 Processing file lib/bdev/part.c 00:07:59.134 Processing file lib/bdev/bdev_zone.c 00:07:59.134 Processing file lib/bdev/bdev_rpc.c 00:07:59.460 Processing file lib/blob/blobstore.h 00:07:59.460 Processing file lib/blob/blob_bs_dev.c 00:07:59.460 Processing file lib/blob/blobstore.c 00:07:59.460 Processing file lib/blob/zeroes.c 00:07:59.460 Processing file lib/blob/request.c 00:07:59.460 Processing file lib/blobfs/tree.c 00:07:59.460 Processing file lib/blobfs/blobfs.c 00:07:59.460 Processing file lib/conf/conf.c 00:07:59.719 Processing file lib/dma/dma.c 00:07:59.719 Processing file lib/env_dpdk/pci_vmd.c 00:07:59.719 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:07:59.719 Processing file lib/env_dpdk/pci_virtio.c 00:07:59.719 Processing file lib/env_dpdk/pci_ioat.c 00:07:59.719 Processing file lib/env_dpdk/pci_event.c 00:07:59.719 Processing file lib/env_dpdk/init.c 00:07:59.719 Processing file lib/env_dpdk/env.c 00:07:59.719 Processing file lib/env_dpdk/pci.c 00:07:59.719 Processing file lib/env_dpdk/pci_idxd.c 00:07:59.719 Processing file lib/env_dpdk/memory.c 00:07:59.719 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:07:59.719 Processing file lib/env_dpdk/sigbus_handler.c 00:07:59.719 Processing file lib/env_dpdk/threads.c 00:07:59.719 Processing file lib/env_dpdk/pci_dpdk.c 00:07:59.978 Processing file lib/event/reactor.c 00:07:59.978 Processing file lib/event/app_rpc.c 00:07:59.978 Processing file lib/event/scheduler_static.c 00:07:59.978 Processing file lib/event/log_rpc.c 00:07:59.978 Processing file lib/event/app.c 00:08:00.236 Processing file lib/ftl/ftl_rq.c 00:08:00.236 Processing file lib/ftl/ftl_core.h 00:08:00.236 Processing file lib/ftl/ftl_io.h 00:08:00.236 Processing file lib/ftl/ftl_nv_cache.c 00:08:00.236 Processing file lib/ftl/ftl_nv_cache_io.h 00:08:00.236 Processing file lib/ftl/ftl_trace.c 00:08:00.236 Processing file lib/ftl/ftl_band.h 00:08:00.236 Processing file lib/ftl/ftl_core.c 00:08:00.236 Processing file lib/ftl/ftl_nv_cache.h 00:08:00.236 Processing file lib/ftl/ftl_reloc.c 00:08:00.236 Processing file lib/ftl/ftl_l2p_cache.c 00:08:00.236 Processing file lib/ftl/ftl_writer.h 00:08:00.236 Processing file lib/ftl/ftl_l2p.c 00:08:00.236 Processing file lib/ftl/ftl_layout.c 00:08:00.236 Processing file lib/ftl/ftl_io.c 00:08:00.236 
Processing file lib/ftl/ftl_writer.c 00:08:00.236 Processing file lib/ftl/ftl_l2p_flat.c 00:08:00.236 Processing file lib/ftl/ftl_sb.c 00:08:00.236 Processing file lib/ftl/ftl_band_ops.c 00:08:00.236 Processing file lib/ftl/ftl_band.c 00:08:00.236 Processing file lib/ftl/ftl_debug.c 00:08:00.236 Processing file lib/ftl/ftl_debug.h 00:08:00.236 Processing file lib/ftl/ftl_p2l.c 00:08:00.236 Processing file lib/ftl/ftl_init.c 00:08:00.495 Processing file lib/ftl/base/ftl_base_bdev.c 00:08:00.495 Processing file lib/ftl/base/ftl_base_dev.c 00:08:00.753 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:08:00.753 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:08:00.753 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:08:00.753 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:08:00.753 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:08:00.753 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:08:00.753 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:08:00.753 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:08:00.753 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:08:00.753 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:08:00.753 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:08:00.753 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:08:00.753 Processing file lib/ftl/mngt/ftl_mngt.c 00:08:00.753 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:08:00.753 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:08:00.753 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:08:00.753 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:08:00.753 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:08:00.753 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:08:01.011 Processing file lib/ftl/utils/ftl_md.c 00:08:01.011 Processing file lib/ftl/utils/ftl_property.c 00:08:01.011 Processing file lib/ftl/utils/ftl_mempool.c 00:08:01.011 Processing file lib/ftl/utils/ftl_bitmap.c 00:08:01.011 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:08:01.011 Processing file lib/ftl/utils/ftl_conf.c 00:08:01.011 Processing file lib/ftl/utils/ftl_df.h 00:08:01.011 Processing file lib/ftl/utils/ftl_property.h 00:08:01.011 Processing file lib/ftl/utils/ftl_addr_utils.h 00:08:01.011 Processing file lib/idxd/idxd_internal.h 00:08:01.011 Processing file lib/idxd/idxd.c 00:08:01.011 Processing file lib/idxd/idxd_user.c 00:08:01.269 Processing file lib/init/rpc.c 00:08:01.269 Processing file lib/init/json_config.c 00:08:01.269 Processing file lib/init/subsystem_rpc.c 00:08:01.269 Processing file lib/init/subsystem.c 00:08:01.269 Processing file lib/ioat/ioat_internal.h 00:08:01.269 Processing file lib/ioat/ioat.c 00:08:01.528 Processing file lib/iscsi/portal_grp.c 00:08:01.528 Processing file lib/iscsi/param.c 00:08:01.528 Processing file lib/iscsi/iscsi.c 00:08:01.528 Processing file lib/iscsi/task.c 00:08:01.528 Processing file lib/iscsi/iscsi_rpc.c 00:08:01.528 Processing file lib/iscsi/iscsi.h 00:08:01.528 Processing file lib/iscsi/tgt_node.c 00:08:01.528 Processing file lib/iscsi/md5.c 00:08:01.528 Processing file lib/iscsi/init_grp.c 00:08:01.528 Processing file lib/iscsi/task.h 00:08:01.528 Processing file lib/iscsi/conn.c 00:08:01.528 Processing file lib/iscsi/iscsi_subsystem.c 00:08:01.787 Processing file lib/json/json_parse.c 00:08:01.788 Processing file lib/json/json_util.c 00:08:01.788 Processing file lib/json/json_write.c 00:08:01.788 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:08:01.788 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:08:01.788 Processing file lib/jsonrpc/jsonrpc_client.c 
00:08:01.788 Processing file lib/jsonrpc/jsonrpc_server.c 00:08:02.047 Processing file lib/log/log.c 00:08:02.047 Processing file lib/log/log_flags.c 00:08:02.047 Processing file lib/log/log_deprecated.c 00:08:02.047 Processing file lib/lvol/lvol.c 00:08:02.047 Processing file lib/nbd/nbd_rpc.c 00:08:02.047 Processing file lib/nbd/nbd.c 00:08:02.306 Processing file lib/notify/notify_rpc.c 00:08:02.306 Processing file lib/notify/notify.c 00:08:02.874 Processing file lib/nvme/nvme_fabric.c 00:08:02.874 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:08:02.874 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:08:02.874 Processing file lib/nvme/nvme_io_msg.c 00:08:02.874 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:08:02.874 Processing file lib/nvme/nvme_transport.c 00:08:02.874 Processing file lib/nvme/nvme_rdma.c 00:08:02.874 Processing file lib/nvme/nvme_pcie.c 00:08:02.875 Processing file lib/nvme/nvme_vfio_user.c 00:08:02.875 Processing file lib/nvme/nvme_opal.c 00:08:02.875 Processing file lib/nvme/nvme_tcp.c 00:08:02.875 Processing file lib/nvme/nvme_cuse.c 00:08:02.875 Processing file lib/nvme/nvme_discovery.c 00:08:02.875 Processing file lib/nvme/nvme_pcie_internal.h 00:08:02.875 Processing file lib/nvme/nvme_ctrlr.c 00:08:02.875 Processing file lib/nvme/nvme_qpair.c 00:08:02.875 Processing file lib/nvme/nvme_poll_group.c 00:08:02.875 Processing file lib/nvme/nvme_internal.h 00:08:02.875 Processing file lib/nvme/nvme_zns.c 00:08:02.875 Processing file lib/nvme/nvme_pcie_common.c 00:08:02.875 Processing file lib/nvme/nvme_ns_cmd.c 00:08:02.875 Processing file lib/nvme/nvme.c 00:08:02.875 Processing file lib/nvme/nvme_ns.c 00:08:02.875 Processing file lib/nvme/nvme_quirks.c 00:08:03.442 Processing file lib/nvmf/rdma.c 00:08:03.442 Processing file lib/nvmf/nvmf_internal.h 00:08:03.442 Processing file lib/nvmf/ctrlr_discovery.c 00:08:03.442 Processing file lib/nvmf/nvmf_rpc.c 00:08:03.442 Processing file lib/nvmf/nvmf.c 00:08:03.442 Processing file lib/nvmf/transport.c 00:08:03.442 Processing file lib/nvmf/subsystem.c 00:08:03.442 Processing file lib/nvmf/ctrlr_bdev.c 00:08:03.442 Processing file lib/nvmf/tcp.c 00:08:03.443 Processing file lib/nvmf/ctrlr.c 00:08:03.443 Processing file lib/rdma/common.c 00:08:03.443 Processing file lib/rdma/rdma_verbs.c 00:08:03.443 Processing file lib/rpc/rpc.c 00:08:03.701 Processing file lib/scsi/scsi_rpc.c 00:08:03.701 Processing file lib/scsi/dev.c 00:08:03.701 Processing file lib/scsi/scsi.c 00:08:03.701 Processing file lib/scsi/lun.c 00:08:03.701 Processing file lib/scsi/task.c 00:08:03.701 Processing file lib/scsi/scsi_pr.c 00:08:03.701 Processing file lib/scsi/scsi_bdev.c 00:08:03.701 Processing file lib/scsi/port.c 00:08:03.701 Processing file lib/sock/sock.c 00:08:03.701 Processing file lib/sock/sock_rpc.c 00:08:03.961 Processing file lib/thread/thread.c 00:08:03.961 Processing file lib/thread/iobuf.c 00:08:03.961 Processing file lib/trace/trace.c 00:08:03.961 Processing file lib/trace/trace_rpc.c 00:08:03.961 Processing file lib/trace/trace_flags.c 00:08:03.961 Processing file lib/trace_parser/trace.cpp 00:08:04.219 Processing file lib/ut/ut.c 00:08:04.219 Processing file lib/ut_mock/mock.c 00:08:04.480 Processing file lib/util/uuid.c 00:08:04.481 Processing file lib/util/dif.c 00:08:04.481 Processing file lib/util/crc16.c 00:08:04.481 Processing file lib/util/crc32_ieee.c 00:08:04.481 Processing file lib/util/base64.c 00:08:04.481 Processing file lib/util/crc32.c 00:08:04.481 Processing file lib/util/iov.c 00:08:04.481 Processing file 
lib/util/crc32c.c 00:08:04.481 Processing file lib/util/hexlify.c 00:08:04.481 Processing file lib/util/pipe.c 00:08:04.481 Processing file lib/util/xor.c 00:08:04.481 Processing file lib/util/math.c 00:08:04.481 Processing file lib/util/fd_group.c 00:08:04.481 Processing file lib/util/fd.c 00:08:04.481 Processing file lib/util/zipf.c 00:08:04.481 Processing file lib/util/bit_array.c 00:08:04.481 Processing file lib/util/string.c 00:08:04.481 Processing file lib/util/crc64.c 00:08:04.481 Processing file lib/util/file.c 00:08:04.481 Processing file lib/util/strerror_tls.c 00:08:04.481 Processing file lib/util/cpuset.c 00:08:04.481 Processing file lib/vfio_user/host/vfio_user_pci.c 00:08:04.481 Processing file lib/vfio_user/host/vfio_user.c 00:08:04.751 Processing file lib/vhost/vhost.c 00:08:04.751 Processing file lib/vhost/vhost_rpc.c 00:08:04.751 Processing file lib/vhost/vhost_blk.c 00:08:04.751 Processing file lib/vhost/rte_vhost_user.c 00:08:04.751 Processing file lib/vhost/vhost_scsi.c 00:08:04.751 Processing file lib/vhost/vhost_internal.h 00:08:05.018 Processing file lib/virtio/virtio_vhost_user.c 00:08:05.018 Processing file lib/virtio/virtio_pci.c 00:08:05.018 Processing file lib/virtio/virtio_vfio_user.c 00:08:05.018 Processing file lib/virtio/virtio.c 00:08:05.018 Processing file lib/vmd/vmd.c 00:08:05.018 Processing file lib/vmd/led.c 00:08:05.018 Processing file module/accel/dsa/accel_dsa.c 00:08:05.018 Processing file module/accel/dsa/accel_dsa_rpc.c 00:08:05.276 Processing file module/accel/error/accel_error_rpc.c 00:08:05.276 Processing file module/accel/error/accel_error.c 00:08:05.276 Processing file module/accel/iaa/accel_iaa.c 00:08:05.276 Processing file module/accel/iaa/accel_iaa_rpc.c 00:08:05.276 Processing file module/accel/ioat/accel_ioat_rpc.c 00:08:05.276 Processing file module/accel/ioat/accel_ioat.c 00:08:05.536 Processing file module/bdev/aio/bdev_aio.c 00:08:05.536 Processing file module/bdev/aio/bdev_aio_rpc.c 00:08:05.536 Processing file module/bdev/delay/vbdev_delay.c 00:08:05.536 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:08:05.536 Processing file module/bdev/error/vbdev_error_rpc.c 00:08:05.536 Processing file module/bdev/error/vbdev_error.c 00:08:05.796 Processing file module/bdev/ftl/bdev_ftl.c 00:08:05.796 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:08:05.796 Processing file module/bdev/gpt/gpt.h 00:08:05.796 Processing file module/bdev/gpt/gpt.c 00:08:05.796 Processing file module/bdev/gpt/vbdev_gpt.c 00:08:05.796 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:08:05.796 Processing file module/bdev/iscsi/bdev_iscsi.c 00:08:06.054 Processing file module/bdev/lvol/vbdev_lvol.c 00:08:06.054 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:08:06.054 Processing file module/bdev/malloc/bdev_malloc.c 00:08:06.054 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:08:06.054 Processing file module/bdev/null/bdev_null.c 00:08:06.054 Processing file module/bdev/null/bdev_null_rpc.c 00:08:06.313 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:08:06.313 Processing file module/bdev/nvme/bdev_mdns_client.c 00:08:06.314 Processing file module/bdev/nvme/vbdev_opal.c 00:08:06.314 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:08:06.314 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:08:06.314 Processing file module/bdev/nvme/bdev_nvme.c 00:08:06.314 Processing file module/bdev/nvme/nvme_rpc.c 00:08:06.572 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:08:06.572 Processing file 
module/bdev/passthru/vbdev_passthru.c 00:08:06.832 Processing file module/bdev/raid/raid1.c 00:08:06.832 Processing file module/bdev/raid/bdev_raid_sb.c 00:08:06.832 Processing file module/bdev/raid/bdev_raid.h 00:08:06.832 Processing file module/bdev/raid/raid5f.c 00:08:06.832 Processing file module/bdev/raid/bdev_raid_rpc.c 00:08:06.832 Processing file module/bdev/raid/concat.c 00:08:06.832 Processing file module/bdev/raid/raid0.c 00:08:06.832 Processing file module/bdev/raid/bdev_raid.c 00:08:06.832 Processing file module/bdev/split/vbdev_split.c 00:08:06.832 Processing file module/bdev/split/vbdev_split_rpc.c 00:08:06.832 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:08:06.832 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:08:06.832 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:08:07.091 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:08:07.091 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:08:07.091 Processing file module/blob/bdev/blob_bdev.c 00:08:07.091 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:08:07.091 Processing file module/blobfs/bdev/blobfs_bdev.c 00:08:07.091 Processing file module/env_dpdk/env_dpdk_rpc.c 00:08:07.350 Processing file module/event/subsystems/accel/accel.c 00:08:07.350 Processing file module/event/subsystems/bdev/bdev.c 00:08:07.350 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:08:07.350 Processing file module/event/subsystems/iobuf/iobuf.c 00:08:07.350 Processing file module/event/subsystems/iscsi/iscsi.c 00:08:07.350 Processing file module/event/subsystems/nbd/nbd.c 00:08:07.609 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:08:07.609 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:08:07.609 Processing file module/event/subsystems/scheduler/scheduler.c 00:08:07.609 Processing file module/event/subsystems/scsi/scsi.c 00:08:07.609 Processing file module/event/subsystems/sock/sock.c 00:08:07.867 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:08:07.868 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:08:07.868 Processing file module/event/subsystems/vmd/vmd.c 00:08:07.868 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:08:07.868 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:08:08.126 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:08:08.126 Processing file module/scheduler/gscheduler/gscheduler.c 00:08:08.126 Processing file module/sock/sock_kernel.h 00:08:08.126 Processing file module/sock/posix/posix.c 00:08:08.126 Writing directory view page. 00:08:08.126 Overall coverage rate: 00:08:08.126 lines......: 39.1% (39266 of 100435 lines) 00:08:08.126 functions..: 42.8% (3587 of 8384 functions) 00:08:08.126 00:08:08.126 00:08:08.126 ===================== 00:08:08.126 All unit tests passed 00:08:08.126 ===================== 00:08:08.126 WARN: lcov not installed or SPDK built without coverage! 
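The "Overall coverage rate" figures above are computed from the merged tracefile, so they can be re-derived later without re-running genhtml. A one-liner sketch, assuming a stock lcov on PATH:

    # Print the same lines/functions percentages from the saved tracefile.
    lcov --summary /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info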
00:08:08.126 16:46:56 -- unit/unittest.sh@277 -- # set +x 00:08:08.126 00:08:08.126 00:08:08.126 00:08:08.126 real 2m55.418s 00:08:08.126 user 2m31.427s 00:08:08.126 sys 0m14.496s 00:08:08.126 16:46:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:08.126 ************************************ 00:08:08.126 END TEST unittest 00:08:08.126 ************************************ 00:08:08.126 16:46:56 -- common/autotest_common.sh@10 -- # set +x 00:08:08.386 16:46:57 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:08:08.386 16:46:57 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:08:08.386 16:46:57 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:08:08.386 16:46:57 -- spdk/autotest.sh@160 -- # timing_enter lib 00:08:08.386 16:46:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:08.386 16:46:57 -- common/autotest_common.sh@10 -- # set +x 00:08:08.386 16:46:57 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:08.386 16:46:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:08.386 16:46:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.386 16:46:57 -- common/autotest_common.sh@10 -- # set +x 00:08:08.386 ************************************ 00:08:08.386 START TEST env 00:08:08.386 ************************************ 00:08:08.386 16:46:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:08.386 * Looking for test storage... 00:08:08.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:08.386 16:46:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:08.386 16:46:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:08.386 16:46:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:08.386 16:46:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:08.386 16:46:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:08.386 16:46:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:08.386 16:46:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:08.386 16:46:57 -- scripts/common.sh@335 -- # IFS=.-: 00:08:08.386 16:46:57 -- scripts/common.sh@335 -- # read -ra ver1 00:08:08.386 16:46:57 -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.386 16:46:57 -- scripts/common.sh@336 -- # read -ra ver2 00:08:08.386 16:46:57 -- scripts/common.sh@337 -- # local 'op=<' 00:08:08.386 16:46:57 -- scripts/common.sh@339 -- # ver1_l=2 00:08:08.386 16:46:57 -- scripts/common.sh@340 -- # ver2_l=1 00:08:08.386 16:46:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:08.386 16:46:57 -- scripts/common.sh@343 -- # case "$op" in 00:08:08.386 16:46:57 -- scripts/common.sh@344 -- # : 1 00:08:08.386 16:46:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:08.386 16:46:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.386 16:46:57 -- scripts/common.sh@364 -- # decimal 1 00:08:08.386 16:46:57 -- scripts/common.sh@352 -- # local d=1 00:08:08.386 16:46:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.386 16:46:57 -- scripts/common.sh@354 -- # echo 1 00:08:08.386 16:46:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:08.386 16:46:57 -- scripts/common.sh@365 -- # decimal 2 00:08:08.386 16:46:57 -- scripts/common.sh@352 -- # local d=2 00:08:08.386 16:46:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.386 16:46:57 -- scripts/common.sh@354 -- # echo 2 00:08:08.386 16:46:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:08.386 16:46:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:08.386 16:46:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:08.386 16:46:57 -- scripts/common.sh@367 -- # return 0 00:08:08.386 16:46:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.386 16:46:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:08.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.386 --rc genhtml_branch_coverage=1 00:08:08.386 --rc genhtml_function_coverage=1 00:08:08.386 --rc genhtml_legend=1 00:08:08.386 --rc geninfo_all_blocks=1 00:08:08.386 --rc geninfo_unexecuted_blocks=1 00:08:08.386 00:08:08.386 ' 00:08:08.386 16:46:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:08.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.386 --rc genhtml_branch_coverage=1 00:08:08.386 --rc genhtml_function_coverage=1 00:08:08.386 --rc genhtml_legend=1 00:08:08.386 --rc geninfo_all_blocks=1 00:08:08.386 --rc geninfo_unexecuted_blocks=1 00:08:08.386 00:08:08.386 ' 00:08:08.386 16:46:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:08.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.386 --rc genhtml_branch_coverage=1 00:08:08.386 --rc genhtml_function_coverage=1 00:08:08.386 --rc genhtml_legend=1 00:08:08.386 --rc geninfo_all_blocks=1 00:08:08.386 --rc geninfo_unexecuted_blocks=1 00:08:08.386 00:08:08.386 ' 00:08:08.386 16:46:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:08.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.386 --rc genhtml_branch_coverage=1 00:08:08.386 --rc genhtml_function_coverage=1 00:08:08.386 --rc genhtml_legend=1 00:08:08.386 --rc geninfo_all_blocks=1 00:08:08.386 --rc geninfo_unexecuted_blocks=1 00:08:08.386 00:08:08.386 ' 00:08:08.386 16:46:57 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:08.386 16:46:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:08.386 16:46:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.386 16:46:57 -- common/autotest_common.sh@10 -- # set +x 00:08:08.386 ************************************ 00:08:08.386 START TEST env_memory 00:08:08.386 ************************************ 00:08:08.386 16:46:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:08.646 00:08:08.646 00:08:08.646 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.646 http://cunit.sourceforge.net/ 00:08:08.646 00:08:08.646 00:08:08.646 Suite: memory 00:08:08.646 Test: alloc and free memory map ...[2024-11-05 16:46:57.332683] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:08.646 passed 00:08:08.646 Test: mem 
map translation ...[2024-11-05 16:46:57.381642] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:08.646 [2024-11-05 16:46:57.381761] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:08.646 [2024-11-05 16:46:57.381874] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:08.646 [2024-11-05 16:46:57.381964] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:08.646 passed 00:08:08.646 Test: mem map registration ...[2024-11-05 16:46:57.467957] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:08.646 [2024-11-05 16:46:57.468063] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:08.646 passed 00:08:08.905 Test: mem map adjacent registrations ...passed 00:08:08.905 00:08:08.905 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.905 suites 1 1 n/a 0 0 00:08:08.905 tests 4 4 4 0 0 00:08:08.905 asserts 152 152 152 0 n/a 00:08:08.905 00:08:08.905 Elapsed time = 0.296 seconds 00:08:08.905 00:08:08.905 real 0m0.331s 00:08:08.905 user 0m0.303s 00:08:08.905 sys 0m0.028s 00:08:08.905 16:46:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:08.905 16:46:57 -- common/autotest_common.sh@10 -- # set +x 00:08:08.906 ************************************ 00:08:08.906 END TEST env_memory 00:08:08.906 ************************************ 00:08:08.906 16:46:57 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:08.906 16:46:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:08.906 16:46:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.906 16:46:57 -- common/autotest_common.sh@10 -- # set +x 00:08:08.906 ************************************ 00:08:08.906 START TEST env_vtophys 00:08:08.906 ************************************ 00:08:08.906 16:46:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:08.906 EAL: lib.eal log level changed from notice to debug 00:08:08.906 EAL: Detected lcore 0 as core 0 on socket 0 00:08:08.906 EAL: Detected lcore 1 as core 0 on socket 0 00:08:08.906 EAL: Detected lcore 2 as core 0 on socket 0 00:08:08.906 EAL: Detected lcore 3 as core 0 on socket 0 00:08:08.906 EAL: Detected lcore 4 as core 0 on socket 0 00:08:08.906 EAL: Detected lcore 5 as core 0 on socket 0 00:08:08.906 EAL: Detected lcore 6 as core 0 on socket 0 00:08:08.906 EAL: Detected lcore 7 as core 0 on socket 0 00:08:08.906 EAL: Detected lcore 8 as core 0 on socket 0 00:08:08.906 EAL: Detected lcore 9 as core 0 on socket 0 00:08:08.906 EAL: Maximum logical cores by configuration: 128 00:08:08.906 EAL: Detected CPU lcores: 10 00:08:08.906 EAL: Detected NUMA nodes: 1 00:08:08.906 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:08:08.906 EAL: Checking presence of .so 'librte_eal.so.24' 00:08:08.906 EAL: Checking presence of .so 'librte_eal.so' 00:08:08.906 EAL: Detected static linkage of DPDK 00:08:08.906 EAL: No shared files mode enabled, IPC will be 
disabled 00:08:08.906 EAL: Selected IOVA mode 'PA' 00:08:08.906 EAL: Probing VFIO support... 00:08:08.906 EAL: IOMMU type 1 (Type 1) is supported 00:08:08.906 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:08.906 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:08.906 EAL: VFIO support initialized 00:08:08.906 EAL: Ask a virtual area of 0x2e000 bytes 00:08:08.906 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:08.906 EAL: Setting up physically contiguous memory... 00:08:08.906 EAL: Setting maximum number of open files to 1048576 00:08:08.906 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:08.906 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:08.906 EAL: Ask a virtual area of 0x61000 bytes 00:08:08.906 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:08.906 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:08.906 EAL: Ask a virtual area of 0x400000000 bytes 00:08:08.906 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:08.906 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:08.906 EAL: Ask a virtual area of 0x61000 bytes 00:08:08.906 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:08.906 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:08.906 EAL: Ask a virtual area of 0x400000000 bytes 00:08:08.906 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:08.906 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:08.906 EAL: Ask a virtual area of 0x61000 bytes 00:08:08.906 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:08.906 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:08.906 EAL: Ask a virtual area of 0x400000000 bytes 00:08:08.906 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:08.906 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:08.906 EAL: Ask a virtual area of 0x61000 bytes 00:08:08.906 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:08.906 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:08.906 EAL: Ask a virtual area of 0x400000000 bytes 00:08:08.906 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:08.906 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:08.906 EAL: Hugepages will be freed exactly as allocated. 00:08:08.906 EAL: No shared files mode enabled, IPC is disabled 00:08:08.906 EAL: No shared files mode enabled, IPC is disabled 00:08:09.164 EAL: TSC frequency is ~2200000 KHz 00:08:09.164 EAL: Main lcore 0 is ready (tid=7f4c48b17a80;cpuset=[0]) 00:08:09.164 EAL: Trying to obtain current memory policy. 00:08:09.164 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:09.164 EAL: Restoring previous memory policy: 0 00:08:09.164 EAL: request: mp_malloc_sync 00:08:09.164 EAL: No shared files mode enabled, IPC is disabled 00:08:09.164 EAL: Heap on socket 0 was expanded by 2MB 00:08:09.164 EAL: No shared files mode enabled, IPC is disabled 00:08:09.164 EAL: Mem event callback 'spdk:(nil)' registered 00:08:09.164 00:08:09.164 00:08:09.164 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.164 http://cunit.sourceforge.net/ 00:08:09.164 00:08:09.164 00:08:09.164 Suite: components_suite 00:08:09.423 Test: vtophys_malloc_test ...passed 00:08:09.423 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:08:09.423 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:09.423 EAL: Restoring previous memory policy: 0 00:08:09.423 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.423 EAL: request: mp_malloc_sync 00:08:09.423 EAL: No shared files mode enabled, IPC is disabled 00:08:09.423 EAL: Heap on socket 0 was expanded by 4MB 00:08:09.423 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.423 EAL: request: mp_malloc_sync 00:08:09.423 EAL: No shared files mode enabled, IPC is disabled 00:08:09.423 EAL: Heap on socket 0 was shrunk by 4MB 00:08:09.423 EAL: Trying to obtain current memory policy. 00:08:09.423 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:09.423 EAL: Restoring previous memory policy: 0 00:08:09.423 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.423 EAL: request: mp_malloc_sync 00:08:09.423 EAL: No shared files mode enabled, IPC is disabled 00:08:09.423 EAL: Heap on socket 0 was expanded by 6MB 00:08:09.423 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.423 EAL: request: mp_malloc_sync 00:08:09.423 EAL: No shared files mode enabled, IPC is disabled 00:08:09.423 EAL: Heap on socket 0 was shrunk by 6MB 00:08:09.423 EAL: Trying to obtain current memory policy. 00:08:09.423 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:09.423 EAL: Restoring previous memory policy: 0 00:08:09.423 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.423 EAL: request: mp_malloc_sync 00:08:09.423 EAL: No shared files mode enabled, IPC is disabled 00:08:09.423 EAL: Heap on socket 0 was expanded by 10MB 00:08:09.423 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.423 EAL: request: mp_malloc_sync 00:08:09.423 EAL: No shared files mode enabled, IPC is disabled 00:08:09.423 EAL: Heap on socket 0 was shrunk by 10MB 00:08:09.423 EAL: Trying to obtain current memory policy. 00:08:09.423 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:09.423 EAL: Restoring previous memory policy: 0 00:08:09.423 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.423 EAL: request: mp_malloc_sync 00:08:09.423 EAL: No shared files mode enabled, IPC is disabled 00:08:09.423 EAL: Heap on socket 0 was expanded by 18MB 00:08:09.423 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.423 EAL: request: mp_malloc_sync 00:08:09.423 EAL: No shared files mode enabled, IPC is disabled 00:08:09.423 EAL: Heap on socket 0 was shrunk by 18MB 00:08:09.682 EAL: Trying to obtain current memory policy. 00:08:09.682 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:09.682 EAL: Restoring previous memory policy: 0 00:08:09.682 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.682 EAL: request: mp_malloc_sync 00:08:09.682 EAL: No shared files mode enabled, IPC is disabled 00:08:09.682 EAL: Heap on socket 0 was expanded by 34MB 00:08:09.682 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.682 EAL: request: mp_malloc_sync 00:08:09.682 EAL: No shared files mode enabled, IPC is disabled 00:08:09.682 EAL: Heap on socket 0 was shrunk by 34MB 00:08:09.682 EAL: Trying to obtain current memory policy. 
00:08:09.682 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:09.682 EAL: Restoring previous memory policy: 0 00:08:09.682 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.682 EAL: request: mp_malloc_sync 00:08:09.682 EAL: No shared files mode enabled, IPC is disabled 00:08:09.682 EAL: Heap on socket 0 was expanded by 66MB 00:08:09.682 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.682 EAL: request: mp_malloc_sync 00:08:09.682 EAL: No shared files mode enabled, IPC is disabled 00:08:09.682 EAL: Heap on socket 0 was shrunk by 66MB 00:08:09.941 EAL: Trying to obtain current memory policy. 00:08:09.941 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:09.941 EAL: Restoring previous memory policy: 0 00:08:09.941 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.941 EAL: request: mp_malloc_sync 00:08:09.941 EAL: No shared files mode enabled, IPC is disabled 00:08:09.941 EAL: Heap on socket 0 was expanded by 130MB 00:08:09.941 EAL: Calling mem event callback 'spdk:(nil)' 00:08:09.941 EAL: request: mp_malloc_sync 00:08:09.941 EAL: No shared files mode enabled, IPC is disabled 00:08:09.941 EAL: Heap on socket 0 was shrunk by 130MB 00:08:10.199 EAL: Trying to obtain current memory policy. 00:08:10.199 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.199 EAL: Restoring previous memory policy: 0 00:08:10.199 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.199 EAL: request: mp_malloc_sync 00:08:10.199 EAL: No shared files mode enabled, IPC is disabled 00:08:10.199 EAL: Heap on socket 0 was expanded by 258MB 00:08:10.458 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.716 EAL: request: mp_malloc_sync 00:08:10.716 EAL: No shared files mode enabled, IPC is disabled 00:08:10.716 EAL: Heap on socket 0 was shrunk by 258MB 00:08:10.973 EAL: Trying to obtain current memory policy. 00:08:10.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.973 EAL: Restoring previous memory policy: 0 00:08:10.973 EAL: Calling mem event callback 'spdk:(nil)' 00:08:10.973 EAL: request: mp_malloc_sync 00:08:10.973 EAL: No shared files mode enabled, IPC is disabled 00:08:10.973 EAL: Heap on socket 0 was expanded by 514MB 00:08:11.539 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.797 EAL: request: mp_malloc_sync 00:08:11.798 EAL: No shared files mode enabled, IPC is disabled 00:08:11.798 EAL: Heap on socket 0 was shrunk by 514MB 00:08:12.364 EAL: Trying to obtain current memory policy. 
00:08:12.364 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:12.622 EAL: Restoring previous memory policy: 0 00:08:12.622 EAL: Calling mem event callback 'spdk:(nil)' 00:08:12.622 EAL: request: mp_malloc_sync 00:08:12.622 EAL: No shared files mode enabled, IPC is disabled 00:08:12.622 EAL: Heap on socket 0 was expanded by 1026MB 00:08:13.998 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.998 EAL: request: mp_malloc_sync 00:08:13.998 EAL: No shared files mode enabled, IPC is disabled 00:08:13.998 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:15.373 passed 00:08:15.373 00:08:15.373 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.374 suites 1 1 n/a 0 0 00:08:15.374 tests 2 2 2 0 0 00:08:15.374 asserts 6391 6391 6391 0 n/a 00:08:15.374 00:08:15.374 Elapsed time = 6.028 seconds 00:08:15.374 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.374 EAL: request: mp_malloc_sync 00:08:15.374 EAL: No shared files mode enabled, IPC is disabled 00:08:15.374 EAL: Heap on socket 0 was shrunk by 2MB 00:08:15.374 EAL: No shared files mode enabled, IPC is disabled 00:08:15.374 EAL: No shared files mode enabled, IPC is disabled 00:08:15.374 EAL: No shared files mode enabled, IPC is disabled 00:08:15.374 00:08:15.374 real 0m6.301s 00:08:15.374 user 0m5.256s 00:08:15.374 sys 0m0.907s 00:08:15.374 16:47:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.374 16:47:03 -- common/autotest_common.sh@10 -- # set +x 00:08:15.374 ************************************ 00:08:15.374 END TEST env_vtophys 00:08:15.374 ************************************ 00:08:15.374 16:47:03 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:15.374 16:47:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:15.374 16:47:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.374 16:47:03 -- common/autotest_common.sh@10 -- # set +x 00:08:15.374 ************************************ 00:08:15.374 START TEST env_pci 00:08:15.374 ************************************ 00:08:15.374 16:47:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:15.374 00:08:15.374 00:08:15.374 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.374 http://cunit.sourceforge.net/ 00:08:15.374 00:08:15.374 00:08:15.374 Suite: pci 00:08:15.374 Test: pci_hook ...[2024-11-05 16:47:04.042001] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 102779 has claimed it 00:08:15.374 passed 00:08:15.374 00:08:15.374 EAL: Cannot find device (10000:00:01.0) 00:08:15.374 EAL: Failed to attach device on primary process 00:08:15.374 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.374 suites 1 1 n/a 0 0 00:08:15.374 tests 1 1 1 0 0 00:08:15.374 asserts 25 25 25 0 n/a 00:08:15.374 00:08:15.374 Elapsed time = 0.005 seconds 00:08:15.374 00:08:15.374 real 0m0.077s 00:08:15.374 user 0m0.040s 00:08:15.374 sys 0m0.038s 00:08:15.374 16:47:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.374 16:47:04 -- common/autotest_common.sh@10 -- # set +x 00:08:15.374 ************************************ 00:08:15.374 END TEST env_pci 00:08:15.374 ************************************ 00:08:15.374 16:47:04 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:15.374 16:47:04 -- env/env.sh@15 -- # uname 00:08:15.374 16:47:04 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:15.374 16:47:04 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:08:15.374 16:47:04 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:15.374 16:47:04 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:15.374 16:47:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.374 16:47:04 -- common/autotest_common.sh@10 -- # set +x 00:08:15.374 ************************************ 00:08:15.374 START TEST env_dpdk_post_init 00:08:15.374 ************************************ 00:08:15.374 16:47:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:15.374 EAL: Detected CPU lcores: 10 00:08:15.374 EAL: Detected NUMA nodes: 1 00:08:15.374 EAL: Detected static linkage of DPDK 00:08:15.374 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:15.374 EAL: Selected IOVA mode 'PA' 00:08:15.374 EAL: VFIO support initialized 00:08:15.634 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:15.634 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:08:15.634 Starting DPDK initialization... 00:08:15.634 Starting SPDK post initialization... 00:08:15.634 SPDK NVMe probe 00:08:15.634 Attaching to 0000:00:06.0 00:08:15.634 Attached to 0000:00:06.0 00:08:15.634 Cleaning up... 00:08:15.634 00:08:15.634 real 0m0.264s 00:08:15.634 user 0m0.075s 00:08:15.634 sys 0m0.091s 00:08:15.634 16:47:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.634 16:47:04 -- common/autotest_common.sh@10 -- # set +x 00:08:15.634 ************************************ 00:08:15.634 END TEST env_dpdk_post_init 00:08:15.634 ************************************ 00:08:15.634 16:47:04 -- env/env.sh@26 -- # uname 00:08:15.634 16:47:04 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:15.634 16:47:04 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:15.634 16:47:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:15.634 16:47:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.634 16:47:04 -- common/autotest_common.sh@10 -- # set +x 00:08:15.634 ************************************ 00:08:15.634 START TEST env_mem_callbacks 00:08:15.634 ************************************ 00:08:15.634 16:47:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:15.634 EAL: Detected CPU lcores: 10 00:08:15.634 EAL: Detected NUMA nodes: 1 00:08:15.634 EAL: Detected static linkage of DPDK 00:08:15.892 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:15.892 EAL: Selected IOVA mode 'PA' 00:08:15.892 EAL: VFIO support initialized 00:08:15.892 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:15.892 00:08:15.892 00:08:15.892 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.892 http://cunit.sourceforge.net/ 00:08:15.892 00:08:15.892 00:08:15.892 Suite: memory 00:08:15.892 Test: test ... 
00:08:15.892 register 0x200000200000 2097152 00:08:15.892 malloc 3145728 00:08:15.892 register 0x200000400000 4194304 00:08:15.892 buf 0x2000004fffc0 len 3145728 PASSED 00:08:15.892 malloc 64 00:08:15.892 buf 0x2000004ffec0 len 64 PASSED 00:08:15.892 malloc 4194304 00:08:15.892 register 0x200000800000 6291456 00:08:15.892 buf 0x2000009fffc0 len 4194304 PASSED 00:08:15.892 free 0x2000004fffc0 3145728 00:08:15.892 free 0x2000004ffec0 64 00:08:15.892 unregister 0x200000400000 4194304 PASSED 00:08:15.893 free 0x2000009fffc0 4194304 00:08:15.893 unregister 0x200000800000 6291456 PASSED 00:08:15.893 malloc 8388608 00:08:15.893 register 0x200000400000 10485760 00:08:15.893 buf 0x2000005fffc0 len 8388608 PASSED 00:08:15.893 free 0x2000005fffc0 8388608 00:08:15.893 unregister 0x200000400000 10485760 PASSED 00:08:15.893 passed 00:08:15.893 00:08:15.893 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.893 suites 1 1 n/a 0 0 00:08:15.893 tests 1 1 1 0 0 00:08:15.893 asserts 15 15 15 0 n/a 00:08:15.893 00:08:15.893 Elapsed time = 0.046 seconds 00:08:15.893 00:08:15.893 real 0m0.269s 00:08:15.893 user 0m0.108s 00:08:15.893 sys 0m0.060s 00:08:15.893 16:47:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.893 16:47:04 -- common/autotest_common.sh@10 -- # set +x 00:08:15.893 ************************************ 00:08:15.893 END TEST env_mem_callbacks 00:08:15.893 ************************************ 00:08:15.893 00:08:15.893 real 0m7.712s 00:08:15.893 user 0m6.104s 00:08:15.893 sys 0m1.266s 00:08:15.893 16:47:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.893 16:47:04 -- common/autotest_common.sh@10 -- # set +x 00:08:15.893 ************************************ 00:08:15.893 END TEST env 00:08:15.893 ************************************ 00:08:16.151 16:47:04 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:16.151 16:47:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:16.151 16:47:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.151 16:47:04 -- common/autotest_common.sh@10 -- # set +x 00:08:16.151 ************************************ 00:08:16.151 START TEST rpc 00:08:16.151 ************************************ 00:08:16.151 16:47:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:16.151 * Looking for test storage... 
00:08:16.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:16.151 16:47:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:16.151 16:47:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:16.151 16:47:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:16.151 16:47:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:16.151 16:47:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:16.151 16:47:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:16.151 16:47:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:16.151 16:47:04 -- scripts/common.sh@335 -- # IFS=.-: 00:08:16.151 16:47:04 -- scripts/common.sh@335 -- # read -ra ver1 00:08:16.151 16:47:04 -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.151 16:47:04 -- scripts/common.sh@336 -- # read -ra ver2 00:08:16.151 16:47:04 -- scripts/common.sh@337 -- # local 'op=<' 00:08:16.151 16:47:04 -- scripts/common.sh@339 -- # ver1_l=2 00:08:16.151 16:47:04 -- scripts/common.sh@340 -- # ver2_l=1 00:08:16.151 16:47:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:16.151 16:47:04 -- scripts/common.sh@343 -- # case "$op" in 00:08:16.151 16:47:04 -- scripts/common.sh@344 -- # : 1 00:08:16.151 16:47:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:16.151 16:47:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:16.151 16:47:04 -- scripts/common.sh@364 -- # decimal 1 00:08:16.151 16:47:04 -- scripts/common.sh@352 -- # local d=1 00:08:16.151 16:47:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.151 16:47:04 -- scripts/common.sh@354 -- # echo 1 00:08:16.151 16:47:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:16.151 16:47:04 -- scripts/common.sh@365 -- # decimal 2 00:08:16.151 16:47:04 -- scripts/common.sh@352 -- # local d=2 00:08:16.151 16:47:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.151 16:47:04 -- scripts/common.sh@354 -- # echo 2 00:08:16.151 16:47:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:16.151 16:47:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:16.151 16:47:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:16.151 16:47:04 -- scripts/common.sh@367 -- # return 0 00:08:16.151 16:47:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.152 16:47:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:16.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.152 --rc genhtml_branch_coverage=1 00:08:16.152 --rc genhtml_function_coverage=1 00:08:16.152 --rc genhtml_legend=1 00:08:16.152 --rc geninfo_all_blocks=1 00:08:16.152 --rc geninfo_unexecuted_blocks=1 00:08:16.152 00:08:16.152 ' 00:08:16.152 16:47:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:16.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.152 --rc genhtml_branch_coverage=1 00:08:16.152 --rc genhtml_function_coverage=1 00:08:16.152 --rc genhtml_legend=1 00:08:16.152 --rc geninfo_all_blocks=1 00:08:16.152 --rc geninfo_unexecuted_blocks=1 00:08:16.152 00:08:16.152 ' 00:08:16.152 16:47:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:16.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.152 --rc genhtml_branch_coverage=1 00:08:16.152 --rc genhtml_function_coverage=1 00:08:16.152 --rc genhtml_legend=1 00:08:16.152 --rc geninfo_all_blocks=1 00:08:16.152 --rc geninfo_unexecuted_blocks=1 00:08:16.152 00:08:16.152 ' 00:08:16.152 16:47:04 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:16.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.152 --rc genhtml_branch_coverage=1 00:08:16.152 --rc genhtml_function_coverage=1 00:08:16.152 --rc genhtml_legend=1 00:08:16.152 --rc geninfo_all_blocks=1 00:08:16.152 --rc geninfo_unexecuted_blocks=1 00:08:16.152 00:08:16.152 ' 00:08:16.152 16:47:04 -- rpc/rpc.sh@65 -- # spdk_pid=102913 00:08:16.152 16:47:04 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:16.152 16:47:04 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:16.152 16:47:04 -- rpc/rpc.sh@67 -- # waitforlisten 102913 00:08:16.152 16:47:04 -- common/autotest_common.sh@829 -- # '[' -z 102913 ']' 00:08:16.152 16:47:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.152 16:47:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:16.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.152 16:47:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.152 16:47:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:16.152 16:47:04 -- common/autotest_common.sh@10 -- # set +x 00:08:16.410 [2024-11-05 16:47:05.064018] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:16.410 [2024-11-05 16:47:05.064231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102913 ] 00:08:16.410 [2024-11-05 16:47:05.225158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.668 [2024-11-05 16:47:05.383415] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:16.668 [2024-11-05 16:47:05.383908] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:16.668 [2024-11-05 16:47:05.384112] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 102913' to capture a snapshot of events at runtime. 00:08:16.668 [2024-11-05 16:47:05.384215] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid102913 for offline analysis/debug. 
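The two app_setup_trace notices above name the supported ways to collect the bdev tracepoints just enabled for pid 102913. Both commands are quoted directly from those notices; only the copy destination is an illustrative addition:

    # Capture a snapshot of events from the live target's shared-memory trace region
    spdk_trace -s spdk_tgt -p 102913
    # Or keep the raw shm file for offline analysis/debug
    cp /dev/shm/spdk_tgt_trace.pid102913 /tmp/spdk_tgt_trace.pid102913

The shm file persists only while the target runs, so the copy has to happen before killprocess at the end of the suite.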
00:08:16.668 [2024-11-05 16:47:05.384326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.043 16:47:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:18.043 16:47:06 -- common/autotest_common.sh@862 -- # return 0 00:08:18.043 16:47:06 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:18.043 16:47:06 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:18.043 16:47:06 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:18.043 16:47:06 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:18.043 16:47:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:18.043 16:47:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.043 16:47:06 -- common/autotest_common.sh@10 -- # set +x 00:08:18.043 ************************************ 00:08:18.043 START TEST rpc_integrity 00:08:18.043 ************************************ 00:08:18.043 16:47:06 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:08:18.043 16:47:06 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:18.043 16:47:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.043 16:47:06 -- common/autotest_common.sh@10 -- # set +x 00:08:18.043 16:47:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.043 16:47:06 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:18.043 16:47:06 -- rpc/rpc.sh@13 -- # jq length 00:08:18.043 16:47:06 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:18.043 16:47:06 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:18.043 16:47:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.043 16:47:06 -- common/autotest_common.sh@10 -- # set +x 00:08:18.043 16:47:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.043 16:47:06 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:18.043 16:47:06 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:18.043 16:47:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.043 16:47:06 -- common/autotest_common.sh@10 -- # set +x 00:08:18.043 16:47:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.043 16:47:06 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:18.043 { 00:08:18.043 "name": "Malloc0", 00:08:18.043 "aliases": [ 00:08:18.043 "9bcefb7a-4a30-44b0-b028-257104845bd9" 00:08:18.043 ], 00:08:18.043 "product_name": "Malloc disk", 00:08:18.043 "block_size": 512, 00:08:18.043 "num_blocks": 16384, 00:08:18.043 "uuid": "9bcefb7a-4a30-44b0-b028-257104845bd9", 00:08:18.043 "assigned_rate_limits": { 00:08:18.043 "rw_ios_per_sec": 0, 00:08:18.043 "rw_mbytes_per_sec": 0, 00:08:18.043 "r_mbytes_per_sec": 0, 00:08:18.043 "w_mbytes_per_sec": 0 00:08:18.043 }, 00:08:18.043 "claimed": false, 00:08:18.043 "zoned": false, 00:08:18.043 "supported_io_types": { 00:08:18.043 "read": true, 00:08:18.043 "write": true, 00:08:18.043 "unmap": true, 00:08:18.043 "write_zeroes": true, 00:08:18.043 "flush": true, 00:08:18.043 "reset": true, 00:08:18.043 "compare": false, 00:08:18.043 "compare_and_write": false, 00:08:18.043 "abort": true, 00:08:18.043 "nvme_admin": false, 00:08:18.043 "nvme_io": false 00:08:18.043 }, 00:08:18.043 "memory_domains": [ 00:08:18.043 { 00:08:18.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.043 
"dma_device_type": 2 00:08:18.043 } 00:08:18.043 ], 00:08:18.043 "driver_specific": {} 00:08:18.043 } 00:08:18.043 ]' 00:08:18.043 16:47:06 -- rpc/rpc.sh@17 -- # jq length 00:08:18.043 16:47:06 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:18.043 16:47:06 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:18.043 16:47:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.043 16:47:06 -- common/autotest_common.sh@10 -- # set +x 00:08:18.043 [2024-11-05 16:47:06.854296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:18.043 [2024-11-05 16:47:06.854524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.043 [2024-11-05 16:47:06.854598] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:08:18.043 [2024-11-05 16:47:06.854707] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.043 [2024-11-05 16:47:06.856889] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.043 [2024-11-05 16:47:06.857090] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:18.043 Passthru0 00:08:18.043 16:47:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.043 16:47:06 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:18.043 16:47:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.043 16:47:06 -- common/autotest_common.sh@10 -- # set +x 00:08:18.043 16:47:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.043 16:47:06 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:18.043 { 00:08:18.043 "name": "Malloc0", 00:08:18.043 "aliases": [ 00:08:18.043 "9bcefb7a-4a30-44b0-b028-257104845bd9" 00:08:18.043 ], 00:08:18.043 "product_name": "Malloc disk", 00:08:18.043 "block_size": 512, 00:08:18.043 "num_blocks": 16384, 00:08:18.044 "uuid": "9bcefb7a-4a30-44b0-b028-257104845bd9", 00:08:18.044 "assigned_rate_limits": { 00:08:18.044 "rw_ios_per_sec": 0, 00:08:18.044 "rw_mbytes_per_sec": 0, 00:08:18.044 "r_mbytes_per_sec": 0, 00:08:18.044 "w_mbytes_per_sec": 0 00:08:18.044 }, 00:08:18.044 "claimed": true, 00:08:18.044 "claim_type": "exclusive_write", 00:08:18.044 "zoned": false, 00:08:18.044 "supported_io_types": { 00:08:18.044 "read": true, 00:08:18.044 "write": true, 00:08:18.044 "unmap": true, 00:08:18.044 "write_zeroes": true, 00:08:18.044 "flush": true, 00:08:18.044 "reset": true, 00:08:18.044 "compare": false, 00:08:18.044 "compare_and_write": false, 00:08:18.044 "abort": true, 00:08:18.044 "nvme_admin": false, 00:08:18.044 "nvme_io": false 00:08:18.044 }, 00:08:18.044 "memory_domains": [ 00:08:18.044 { 00:08:18.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.044 "dma_device_type": 2 00:08:18.044 } 00:08:18.044 ], 00:08:18.044 "driver_specific": {} 00:08:18.044 }, 00:08:18.044 { 00:08:18.044 "name": "Passthru0", 00:08:18.044 "aliases": [ 00:08:18.044 "e9996331-f13c-5018-a9ab-4e474fac8d15" 00:08:18.044 ], 00:08:18.044 "product_name": "passthru", 00:08:18.044 "block_size": 512, 00:08:18.044 "num_blocks": 16384, 00:08:18.044 "uuid": "e9996331-f13c-5018-a9ab-4e474fac8d15", 00:08:18.044 "assigned_rate_limits": { 00:08:18.044 "rw_ios_per_sec": 0, 00:08:18.044 "rw_mbytes_per_sec": 0, 00:08:18.044 "r_mbytes_per_sec": 0, 00:08:18.044 "w_mbytes_per_sec": 0 00:08:18.044 }, 00:08:18.044 "claimed": false, 00:08:18.044 "zoned": false, 00:08:18.044 "supported_io_types": { 00:08:18.044 "read": true, 00:08:18.044 "write": true, 00:08:18.044 "unmap": true, 00:08:18.044 
"write_zeroes": true, 00:08:18.044 "flush": true, 00:08:18.044 "reset": true, 00:08:18.044 "compare": false, 00:08:18.044 "compare_and_write": false, 00:08:18.044 "abort": true, 00:08:18.044 "nvme_admin": false, 00:08:18.044 "nvme_io": false 00:08:18.044 }, 00:08:18.044 "memory_domains": [ 00:08:18.044 { 00:08:18.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.044 "dma_device_type": 2 00:08:18.044 } 00:08:18.044 ], 00:08:18.044 "driver_specific": { 00:08:18.044 "passthru": { 00:08:18.044 "name": "Passthru0", 00:08:18.044 "base_bdev_name": "Malloc0" 00:08:18.044 } 00:08:18.044 } 00:08:18.044 } 00:08:18.044 ]' 00:08:18.044 16:47:06 -- rpc/rpc.sh@21 -- # jq length 00:08:18.044 16:47:06 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:18.044 16:47:06 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:18.044 16:47:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.044 16:47:06 -- common/autotest_common.sh@10 -- # set +x 00:08:18.303 16:47:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.303 16:47:06 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:18.303 16:47:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.303 16:47:06 -- common/autotest_common.sh@10 -- # set +x 00:08:18.303 16:47:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.303 16:47:06 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:18.303 16:47:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.303 16:47:06 -- common/autotest_common.sh@10 -- # set +x 00:08:18.303 16:47:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.303 16:47:06 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:18.303 16:47:06 -- rpc/rpc.sh@26 -- # jq length 00:08:18.303 16:47:07 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:18.303 00:08:18.303 real 0m0.317s 00:08:18.303 user 0m0.216s 00:08:18.303 sys 0m0.019s 00:08:18.303 16:47:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.303 ************************************ 00:08:18.303 END TEST rpc_integrity 00:08:18.303 ************************************ 00:08:18.303 16:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:18.303 16:47:07 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:18.303 16:47:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:18.303 16:47:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.303 16:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:18.303 ************************************ 00:08:18.303 START TEST rpc_plugins 00:08:18.303 ************************************ 00:08:18.303 16:47:07 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:08:18.303 16:47:07 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:18.303 16:47:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.303 16:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:18.303 16:47:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.303 16:47:07 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:18.303 16:47:07 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:18.303 16:47:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.303 16:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:18.303 16:47:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.303 16:47:07 -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:18.303 { 00:08:18.303 "name": "Malloc1", 00:08:18.303 "aliases": [ 00:08:18.303 "bb5e5a2e-47bc-430e-9a4d-ebbc2653abe6" 00:08:18.303 ], 00:08:18.303 "product_name": "Malloc disk", 00:08:18.303 
"block_size": 4096, 00:08:18.303 "num_blocks": 256, 00:08:18.303 "uuid": "bb5e5a2e-47bc-430e-9a4d-ebbc2653abe6", 00:08:18.303 "assigned_rate_limits": { 00:08:18.303 "rw_ios_per_sec": 0, 00:08:18.303 "rw_mbytes_per_sec": 0, 00:08:18.303 "r_mbytes_per_sec": 0, 00:08:18.303 "w_mbytes_per_sec": 0 00:08:18.303 }, 00:08:18.303 "claimed": false, 00:08:18.303 "zoned": false, 00:08:18.303 "supported_io_types": { 00:08:18.303 "read": true, 00:08:18.303 "write": true, 00:08:18.303 "unmap": true, 00:08:18.303 "write_zeroes": true, 00:08:18.303 "flush": true, 00:08:18.303 "reset": true, 00:08:18.303 "compare": false, 00:08:18.303 "compare_and_write": false, 00:08:18.303 "abort": true, 00:08:18.303 "nvme_admin": false, 00:08:18.303 "nvme_io": false 00:08:18.303 }, 00:08:18.303 "memory_domains": [ 00:08:18.303 { 00:08:18.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.303 "dma_device_type": 2 00:08:18.303 } 00:08:18.303 ], 00:08:18.303 "driver_specific": {} 00:08:18.303 } 00:08:18.303 ]' 00:08:18.303 16:47:07 -- rpc/rpc.sh@32 -- # jq length 00:08:18.303 16:47:07 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:18.303 16:47:07 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:18.303 16:47:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.303 16:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:18.303 16:47:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.303 16:47:07 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:18.303 16:47:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.303 16:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:18.303 16:47:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.303 16:47:07 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:18.303 16:47:07 -- rpc/rpc.sh@36 -- # jq length 00:08:18.562 16:47:07 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:18.562 00:08:18.562 real 0m0.155s 00:08:18.562 user 0m0.103s 00:08:18.562 sys 0m0.014s 00:08:18.562 16:47:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.562 16:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:18.562 ************************************ 00:08:18.562 END TEST rpc_plugins 00:08:18.562 ************************************ 00:08:18.562 16:47:07 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:18.562 16:47:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:18.562 16:47:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.562 16:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:18.562 ************************************ 00:08:18.562 START TEST rpc_trace_cmd_test 00:08:18.562 ************************************ 00:08:18.562 16:47:07 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:08:18.562 16:47:07 -- rpc/rpc.sh@40 -- # local info 00:08:18.562 16:47:07 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:18.562 16:47:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.562 16:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:18.562 16:47:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.562 16:47:07 -- rpc/rpc.sh@42 -- # info='{ 00:08:18.562 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid102913", 00:08:18.562 "tpoint_group_mask": "0x8", 00:08:18.562 "iscsi_conn": { 00:08:18.562 "mask": "0x2", 00:08:18.562 "tpoint_mask": "0x0" 00:08:18.562 }, 00:08:18.562 "scsi": { 00:08:18.562 "mask": "0x4", 00:08:18.562 "tpoint_mask": "0x0" 00:08:18.562 }, 00:08:18.562 "bdev": { 00:08:18.562 "mask": "0x8", 00:08:18.562 "tpoint_mask": 
"0xffffffffffffffff" 00:08:18.562 }, 00:08:18.562 "nvmf_rdma": { 00:08:18.562 "mask": "0x10", 00:08:18.562 "tpoint_mask": "0x0" 00:08:18.562 }, 00:08:18.562 "nvmf_tcp": { 00:08:18.562 "mask": "0x20", 00:08:18.562 "tpoint_mask": "0x0" 00:08:18.562 }, 00:08:18.562 "ftl": { 00:08:18.562 "mask": "0x40", 00:08:18.562 "tpoint_mask": "0x0" 00:08:18.562 }, 00:08:18.562 "blobfs": { 00:08:18.562 "mask": "0x80", 00:08:18.562 "tpoint_mask": "0x0" 00:08:18.562 }, 00:08:18.562 "dsa": { 00:08:18.562 "mask": "0x200", 00:08:18.562 "tpoint_mask": "0x0" 00:08:18.562 }, 00:08:18.562 "thread": { 00:08:18.562 "mask": "0x400", 00:08:18.562 "tpoint_mask": "0x0" 00:08:18.562 }, 00:08:18.562 "nvme_pcie": { 00:08:18.562 "mask": "0x800", 00:08:18.562 "tpoint_mask": "0x0" 00:08:18.562 }, 00:08:18.562 "iaa": { 00:08:18.562 "mask": "0x1000", 00:08:18.562 "tpoint_mask": "0x0" 00:08:18.562 }, 00:08:18.562 "nvme_tcp": { 00:08:18.562 "mask": "0x2000", 00:08:18.562 "tpoint_mask": "0x0" 00:08:18.562 }, 00:08:18.562 "bdev_nvme": { 00:08:18.562 "mask": "0x4000", 00:08:18.562 "tpoint_mask": "0x0" 00:08:18.562 } 00:08:18.562 }' 00:08:18.562 16:47:07 -- rpc/rpc.sh@43 -- # jq length 00:08:18.562 16:47:07 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:08:18.562 16:47:07 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:18.562 16:47:07 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:18.562 16:47:07 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:18.562 16:47:07 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:18.562 16:47:07 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:18.821 16:47:07 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:18.821 16:47:07 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:18.821 16:47:07 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:18.821 00:08:18.821 real 0m0.263s 00:08:18.821 user 0m0.233s 00:08:18.821 sys 0m0.026s 00:08:18.821 16:47:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.821 ************************************ 00:08:18.821 END TEST rpc_trace_cmd_test 00:08:18.821 ************************************ 00:08:18.821 16:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:18.821 16:47:07 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:18.821 16:47:07 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:18.821 16:47:07 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:18.821 16:47:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:18.821 16:47:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.821 16:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:18.821 ************************************ 00:08:18.821 START TEST rpc_daemon_integrity 00:08:18.821 ************************************ 00:08:18.821 16:47:07 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:08:18.821 16:47:07 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:18.821 16:47:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.821 16:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:18.821 16:47:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.821 16:47:07 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:18.821 16:47:07 -- rpc/rpc.sh@13 -- # jq length 00:08:18.821 16:47:07 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:18.821 16:47:07 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:18.821 16:47:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.821 16:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:18.821 16:47:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.821 16:47:07 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:18.821 16:47:07 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:18.821 16:47:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.821 16:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:18.821 16:47:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.821 16:47:07 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:18.821 { 00:08:18.821 "name": "Malloc2", 00:08:18.821 "aliases": [ 00:08:18.821 "f6f60ea0-27d8-405b-809e-9a466a8b14c8" 00:08:18.821 ], 00:08:18.821 "product_name": "Malloc disk", 00:08:18.821 "block_size": 512, 00:08:18.821 "num_blocks": 16384, 00:08:18.821 "uuid": "f6f60ea0-27d8-405b-809e-9a466a8b14c8", 00:08:18.821 "assigned_rate_limits": { 00:08:18.821 "rw_ios_per_sec": 0, 00:08:18.821 "rw_mbytes_per_sec": 0, 00:08:18.821 "r_mbytes_per_sec": 0, 00:08:18.821 "w_mbytes_per_sec": 0 00:08:18.821 }, 00:08:18.821 "claimed": false, 00:08:18.821 "zoned": false, 00:08:18.822 "supported_io_types": { 00:08:18.822 "read": true, 00:08:18.822 "write": true, 00:08:18.822 "unmap": true, 00:08:18.822 "write_zeroes": true, 00:08:18.822 "flush": true, 00:08:18.822 "reset": true, 00:08:18.822 "compare": false, 00:08:18.822 "compare_and_write": false, 00:08:18.822 "abort": true, 00:08:18.822 "nvme_admin": false, 00:08:18.822 "nvme_io": false 00:08:18.822 }, 00:08:18.822 "memory_domains": [ 00:08:18.822 { 00:08:18.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.822 "dma_device_type": 2 00:08:18.822 } 00:08:18.822 ], 00:08:18.822 "driver_specific": {} 00:08:18.822 } 00:08:18.822 ]' 00:08:18.822 16:47:07 -- rpc/rpc.sh@17 -- # jq length 00:08:19.081 16:47:07 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:19.081 16:47:07 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:19.081 16:47:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.081 16:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:19.081 [2024-11-05 16:47:07.733470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:19.081 [2024-11-05 16:47:07.733662] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.081 [2024-11-05 16:47:07.733736] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:19.081 [2024-11-05 16:47:07.733931] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.081 [2024-11-05 16:47:07.736255] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.081 [2024-11-05 16:47:07.736431] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:19.081 Passthru0 00:08:19.081 16:47:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.081 16:47:07 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:19.081 16:47:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.081 16:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:19.081 16:47:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.081 16:47:07 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:19.081 { 00:08:19.081 "name": "Malloc2", 00:08:19.081 "aliases": [ 00:08:19.081 "f6f60ea0-27d8-405b-809e-9a466a8b14c8" 00:08:19.081 ], 00:08:19.081 "product_name": "Malloc disk", 00:08:19.081 "block_size": 512, 00:08:19.081 "num_blocks": 16384, 00:08:19.081 "uuid": "f6f60ea0-27d8-405b-809e-9a466a8b14c8", 00:08:19.081 "assigned_rate_limits": { 00:08:19.081 "rw_ios_per_sec": 0, 00:08:19.081 "rw_mbytes_per_sec": 0, 00:08:19.081 "r_mbytes_per_sec": 0, 00:08:19.081 
"w_mbytes_per_sec": 0 00:08:19.081 }, 00:08:19.081 "claimed": true, 00:08:19.081 "claim_type": "exclusive_write", 00:08:19.081 "zoned": false, 00:08:19.081 "supported_io_types": { 00:08:19.081 "read": true, 00:08:19.081 "write": true, 00:08:19.081 "unmap": true, 00:08:19.081 "write_zeroes": true, 00:08:19.081 "flush": true, 00:08:19.081 "reset": true, 00:08:19.081 "compare": false, 00:08:19.081 "compare_and_write": false, 00:08:19.081 "abort": true, 00:08:19.081 "nvme_admin": false, 00:08:19.081 "nvme_io": false 00:08:19.081 }, 00:08:19.081 "memory_domains": [ 00:08:19.081 { 00:08:19.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.081 "dma_device_type": 2 00:08:19.081 } 00:08:19.081 ], 00:08:19.081 "driver_specific": {} 00:08:19.081 }, 00:08:19.081 { 00:08:19.081 "name": "Passthru0", 00:08:19.081 "aliases": [ 00:08:19.081 "31af0b98-6c0f-51c6-a3c3-d05a4db7b042" 00:08:19.081 ], 00:08:19.081 "product_name": "passthru", 00:08:19.081 "block_size": 512, 00:08:19.081 "num_blocks": 16384, 00:08:19.081 "uuid": "31af0b98-6c0f-51c6-a3c3-d05a4db7b042", 00:08:19.081 "assigned_rate_limits": { 00:08:19.081 "rw_ios_per_sec": 0, 00:08:19.081 "rw_mbytes_per_sec": 0, 00:08:19.081 "r_mbytes_per_sec": 0, 00:08:19.081 "w_mbytes_per_sec": 0 00:08:19.081 }, 00:08:19.081 "claimed": false, 00:08:19.081 "zoned": false, 00:08:19.081 "supported_io_types": { 00:08:19.081 "read": true, 00:08:19.081 "write": true, 00:08:19.081 "unmap": true, 00:08:19.081 "write_zeroes": true, 00:08:19.081 "flush": true, 00:08:19.081 "reset": true, 00:08:19.081 "compare": false, 00:08:19.081 "compare_and_write": false, 00:08:19.081 "abort": true, 00:08:19.081 "nvme_admin": false, 00:08:19.081 "nvme_io": false 00:08:19.081 }, 00:08:19.081 "memory_domains": [ 00:08:19.081 { 00:08:19.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.081 "dma_device_type": 2 00:08:19.081 } 00:08:19.081 ], 00:08:19.081 "driver_specific": { 00:08:19.081 "passthru": { 00:08:19.081 "name": "Passthru0", 00:08:19.081 "base_bdev_name": "Malloc2" 00:08:19.081 } 00:08:19.081 } 00:08:19.081 } 00:08:19.081 ]' 00:08:19.081 16:47:07 -- rpc/rpc.sh@21 -- # jq length 00:08:19.081 16:47:07 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:19.081 16:47:07 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:19.081 16:47:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.081 16:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:19.081 16:47:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.081 16:47:07 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:19.081 16:47:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.081 16:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:19.081 16:47:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.081 16:47:07 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:19.081 16:47:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.081 16:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:19.081 16:47:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.081 16:47:07 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:19.081 16:47:07 -- rpc/rpc.sh@26 -- # jq length 00:08:19.081 16:47:07 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:19.081 00:08:19.081 real 0m0.309s 00:08:19.081 user 0m0.206s 00:08:19.081 sys 0m0.028s 00:08:19.081 16:47:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.081 ************************************ 00:08:19.081 16:47:07 -- common/autotest_common.sh@10 -- # set +x 00:08:19.081 END TEST 
rpc_daemon_integrity 00:08:19.081 ************************************ 00:08:19.081 16:47:07 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:19.081 16:47:07 -- rpc/rpc.sh@84 -- # killprocess 102913 00:08:19.081 16:47:07 -- common/autotest_common.sh@936 -- # '[' -z 102913 ']' 00:08:19.081 16:47:07 -- common/autotest_common.sh@940 -- # kill -0 102913 00:08:19.081 16:47:07 -- common/autotest_common.sh@941 -- # uname 00:08:19.081 16:47:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:19.081 16:47:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 102913 00:08:19.081 16:47:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:19.081 16:47:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:19.081 16:47:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 102913' 00:08:19.081 killing process with pid 102913 00:08:19.081 16:47:07 -- common/autotest_common.sh@955 -- # kill 102913 00:08:19.081 16:47:07 -- common/autotest_common.sh@960 -- # wait 102913 00:08:20.995 00:08:20.995 real 0m4.928s 00:08:20.995 user 0m5.873s 00:08:20.995 sys 0m0.714s 00:08:20.995 16:47:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:20.995 16:47:09 -- common/autotest_common.sh@10 -- # set +x 00:08:20.995 ************************************ 00:08:20.995 END TEST rpc 00:08:20.995 ************************************ 00:08:20.995 16:47:09 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:20.995 16:47:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:20.995 16:47:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.995 16:47:09 -- common/autotest_common.sh@10 -- # set +x 00:08:20.995 ************************************ 00:08:20.995 START TEST rpc_client 00:08:20.995 ************************************ 00:08:20.995 16:47:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:20.995 * Looking for test storage... 00:08:20.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:20.995 16:47:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:20.995 16:47:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:20.995 16:47:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:21.253 16:47:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:21.253 16:47:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:21.253 16:47:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:21.253 16:47:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:21.253 16:47:09 -- scripts/common.sh@335 -- # IFS=.-: 00:08:21.253 16:47:09 -- scripts/common.sh@335 -- # read -ra ver1 00:08:21.253 16:47:09 -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.253 16:47:09 -- scripts/common.sh@336 -- # read -ra ver2 00:08:21.253 16:47:09 -- scripts/common.sh@337 -- # local 'op=<' 00:08:21.253 16:47:09 -- scripts/common.sh@339 -- # ver1_l=2 00:08:21.253 16:47:09 -- scripts/common.sh@340 -- # ver2_l=1 00:08:21.253 16:47:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:21.253 16:47:09 -- scripts/common.sh@343 -- # case "$op" in 00:08:21.253 16:47:09 -- scripts/common.sh@344 -- # : 1 00:08:21.253 16:47:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:21.253 16:47:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.253 16:47:09 -- scripts/common.sh@364 -- # decimal 1 00:08:21.253 16:47:09 -- scripts/common.sh@352 -- # local d=1 00:08:21.253 16:47:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.253 16:47:09 -- scripts/common.sh@354 -- # echo 1 00:08:21.253 16:47:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:21.253 16:47:09 -- scripts/common.sh@365 -- # decimal 2 00:08:21.253 16:47:09 -- scripts/common.sh@352 -- # local d=2 00:08:21.253 16:47:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.253 16:47:09 -- scripts/common.sh@354 -- # echo 2 00:08:21.253 16:47:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:21.253 16:47:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:21.253 16:47:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:21.253 16:47:09 -- scripts/common.sh@367 -- # return 0 00:08:21.253 16:47:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.253 16:47:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:21.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.253 --rc genhtml_branch_coverage=1 00:08:21.253 --rc genhtml_function_coverage=1 00:08:21.253 --rc genhtml_legend=1 00:08:21.253 --rc geninfo_all_blocks=1 00:08:21.253 --rc geninfo_unexecuted_blocks=1 00:08:21.253 00:08:21.253 ' 00:08:21.253 16:47:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:21.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.253 --rc genhtml_branch_coverage=1 00:08:21.253 --rc genhtml_function_coverage=1 00:08:21.253 --rc genhtml_legend=1 00:08:21.253 --rc geninfo_all_blocks=1 00:08:21.253 --rc geninfo_unexecuted_blocks=1 00:08:21.253 00:08:21.253 ' 00:08:21.253 16:47:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:21.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.253 --rc genhtml_branch_coverage=1 00:08:21.253 --rc genhtml_function_coverage=1 00:08:21.253 --rc genhtml_legend=1 00:08:21.253 --rc geninfo_all_blocks=1 00:08:21.253 --rc geninfo_unexecuted_blocks=1 00:08:21.253 00:08:21.253 ' 00:08:21.253 16:47:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:21.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.253 --rc genhtml_branch_coverage=1 00:08:21.253 --rc genhtml_function_coverage=1 00:08:21.253 --rc genhtml_legend=1 00:08:21.253 --rc geninfo_all_blocks=1 00:08:21.253 --rc geninfo_unexecuted_blocks=1 00:08:21.253 00:08:21.253 ' 00:08:21.253 16:47:09 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:21.253 OK 00:08:21.253 16:47:10 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:21.253 00:08:21.253 real 0m0.219s 00:08:21.253 user 0m0.138s 00:08:21.253 sys 0m0.098s 00:08:21.253 16:47:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.253 16:47:10 -- common/autotest_common.sh@10 -- # set +x 00:08:21.253 ************************************ 00:08:21.253 END TEST rpc_client 00:08:21.253 ************************************ 00:08:21.253 16:47:10 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:21.253 16:47:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:21.253 16:47:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.253 16:47:10 -- common/autotest_common.sh@10 -- # set +x 00:08:21.253 ************************************ 00:08:21.253 START TEST 
json_config 00:08:21.253 ************************************ 00:08:21.253 16:47:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:21.253 16:47:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:21.253 16:47:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:21.253 16:47:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:21.512 16:47:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:21.512 16:47:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:21.512 16:47:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:21.512 16:47:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:21.512 16:47:10 -- scripts/common.sh@335 -- # IFS=.-: 00:08:21.512 16:47:10 -- scripts/common.sh@335 -- # read -ra ver1 00:08:21.512 16:47:10 -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.512 16:47:10 -- scripts/common.sh@336 -- # read -ra ver2 00:08:21.512 16:47:10 -- scripts/common.sh@337 -- # local 'op=<' 00:08:21.512 16:47:10 -- scripts/common.sh@339 -- # ver1_l=2 00:08:21.512 16:47:10 -- scripts/common.sh@340 -- # ver2_l=1 00:08:21.512 16:47:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:21.512 16:47:10 -- scripts/common.sh@343 -- # case "$op" in 00:08:21.512 16:47:10 -- scripts/common.sh@344 -- # : 1 00:08:21.512 16:47:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:21.512 16:47:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:21.512 16:47:10 -- scripts/common.sh@364 -- # decimal 1 00:08:21.512 16:47:10 -- scripts/common.sh@352 -- # local d=1 00:08:21.512 16:47:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.512 16:47:10 -- scripts/common.sh@354 -- # echo 1 00:08:21.512 16:47:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:21.512 16:47:10 -- scripts/common.sh@365 -- # decimal 2 00:08:21.512 16:47:10 -- scripts/common.sh@352 -- # local d=2 00:08:21.512 16:47:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.512 16:47:10 -- scripts/common.sh@354 -- # echo 2 00:08:21.512 16:47:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:21.512 16:47:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:21.512 16:47:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:21.512 16:47:10 -- scripts/common.sh@367 -- # return 0 00:08:21.512 16:47:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.512 16:47:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:21.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.512 --rc genhtml_branch_coverage=1 00:08:21.512 --rc genhtml_function_coverage=1 00:08:21.512 --rc genhtml_legend=1 00:08:21.512 --rc geninfo_all_blocks=1 00:08:21.512 --rc geninfo_unexecuted_blocks=1 00:08:21.512 00:08:21.512 ' 00:08:21.512 16:47:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:21.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.512 --rc genhtml_branch_coverage=1 00:08:21.512 --rc genhtml_function_coverage=1 00:08:21.512 --rc genhtml_legend=1 00:08:21.512 --rc geninfo_all_blocks=1 00:08:21.512 --rc geninfo_unexecuted_blocks=1 00:08:21.512 00:08:21.512 ' 00:08:21.512 16:47:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:21.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.512 --rc genhtml_branch_coverage=1 00:08:21.512 --rc genhtml_function_coverage=1 00:08:21.512 --rc genhtml_legend=1 00:08:21.512 --rc 
geninfo_all_blocks=1 00:08:21.512 --rc geninfo_unexecuted_blocks=1 00:08:21.512 00:08:21.512 ' 00:08:21.512 16:47:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:21.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.512 --rc genhtml_branch_coverage=1 00:08:21.512 --rc genhtml_function_coverage=1 00:08:21.512 --rc genhtml_legend=1 00:08:21.512 --rc geninfo_all_blocks=1 00:08:21.512 --rc geninfo_unexecuted_blocks=1 00:08:21.512 00:08:21.512 ' 00:08:21.512 16:47:10 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:21.512 16:47:10 -- nvmf/common.sh@7 -- # uname -s 00:08:21.512 16:47:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.512 16:47:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.512 16:47:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.512 16:47:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.512 16:47:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.512 16:47:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.512 16:47:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.512 16:47:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.512 16:47:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.512 16:47:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.512 16:47:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5fb1661c-a1e8-4d2a-8489-9b4af597eeb3 00:08:21.512 16:47:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5fb1661c-a1e8-4d2a-8489-9b4af597eeb3 00:08:21.512 16:47:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.512 16:47:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.512 16:47:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:21.512 16:47:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:21.512 16:47:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.512 16:47:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.512 16:47:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.512 16:47:10 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:21.512 16:47:10 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:21.513 16:47:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:21.513 16:47:10 -- paths/export.sh@5 -- # export PATH 
00:08:21.513 16:47:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:21.513 16:47:10 -- nvmf/common.sh@46 -- # : 0 00:08:21.513 16:47:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:21.513 16:47:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:21.513 16:47:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:21.513 16:47:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.513 16:47:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.513 16:47:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:21.513 16:47:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:21.513 16:47:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:21.513 16:47:10 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:08:21.513 16:47:10 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:08:21.513 16:47:10 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:08:21.513 16:47:10 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:21.513 16:47:10 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:08:21.513 16:47:10 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:08:21.513 16:47:10 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:21.513 16:47:10 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:08:21.513 16:47:10 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:21.513 16:47:10 -- json_config/json_config.sh@32 -- # declare -A app_params 00:08:21.513 16:47:10 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:08:21.513 16:47:10 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:08:21.513 16:47:10 -- json_config/json_config.sh@43 -- # last_event_id=0 00:08:21.513 16:47:10 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:21.513 16:47:10 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:08:21.513 INFO: JSON configuration test init 00:08:21.513 16:47:10 -- json_config/json_config.sh@420 -- # json_config_test_init 00:08:21.513 16:47:10 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:08:21.513 16:47:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:21.513 16:47:10 -- common/autotest_common.sh@10 -- # set +x 00:08:21.513 16:47:10 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:08:21.513 16:47:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:21.513 16:47:10 -- common/autotest_common.sh@10 -- # set +x 00:08:21.513 16:47:10 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:08:21.513 16:47:10 -- json_config/json_config.sh@98 -- # local app=target 00:08:21.513 16:47:10 -- json_config/json_config.sh@99 -- # shift 
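json_config_test_start_app, entered here, launches the target with the app_params declared above ('-m 0x1 -s 1024') on the target socket, held idle by --wait-for-rpc until the test initializes it over RPC. A minimal sketch of the start/save/load round-trip this test is built around, using the socket and config path from the declarations above; framework_start_init, save_config, and load_config are standard SPDK RPCs (a real run would also wait for the socket before issuing them, as waitforlisten does below):

    # Start the target idle; nothing initializes until framework_start_init arrives
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init
    # Dump the live configuration to the path declared in configs_path...
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
    # ...which load_config can later replay into a freshly started target
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

The tgt_rpc load_config call traced shortly below is this same replay step, driven through the harness wrapper.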
00:08:21.513 16:47:10 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:08:21.513 16:47:10 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:08:21.513 16:47:10 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:08:21.513 16:47:10 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:21.513 16:47:10 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:21.513 16:47:10 -- json_config/json_config.sh@111 -- # app_pid[$app]=103223 00:08:21.513 16:47:10 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:08:21.513 Waiting for target to run... 00:08:21.513 16:47:10 -- json_config/json_config.sh@114 -- # waitforlisten 103223 /var/tmp/spdk_tgt.sock 00:08:21.513 16:47:10 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:21.513 16:47:10 -- common/autotest_common.sh@829 -- # '[' -z 103223 ']' 00:08:21.513 16:47:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:21.513 16:47:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.513 16:47:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:21.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:21.513 16:47:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.513 16:47:10 -- common/autotest_common.sh@10 -- # set +x 00:08:21.513 [2024-11-05 16:47:10.302449] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:21.513 [2024-11-05 16:47:10.302643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103223 ] 00:08:22.079 [2024-11-05 16:47:10.781302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.079 [2024-11-05 16:47:10.926302] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:22.079 [2024-11-05 16:47:10.926789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.645 00:08:22.645 16:47:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.645 16:47:11 -- common/autotest_common.sh@862 -- # return 0 00:08:22.645 16:47:11 -- json_config/json_config.sh@115 -- # echo '' 00:08:22.645 16:47:11 -- json_config/json_config.sh@322 -- # create_accel_config 00:08:22.645 16:47:11 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:08:22.645 16:47:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:22.645 16:47:11 -- common/autotest_common.sh@10 -- # set +x 00:08:22.645 16:47:11 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:08:22.645 16:47:11 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:08:22.645 16:47:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:22.645 16:47:11 -- common/autotest_common.sh@10 -- # set +x 00:08:22.645 16:47:11 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:22.645 16:47:11 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:08:22.645 16:47:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:23.212 16:47:12 -- 
json_config/json_config.sh@329 -- # tgt_check_notification_types 00:08:23.212 16:47:12 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:08:23.212 16:47:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:23.212 16:47:12 -- common/autotest_common.sh@10 -- # set +x 00:08:23.212 16:47:12 -- json_config/json_config.sh@48 -- # local ret=0 00:08:23.212 16:47:12 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:23.212 16:47:12 -- json_config/json_config.sh@49 -- # local enabled_types 00:08:23.212 16:47:12 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:23.212 16:47:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:23.212 16:47:12 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:23.470 16:47:12 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:08:23.470 16:47:12 -- json_config/json_config.sh@51 -- # local get_types 00:08:23.470 16:47:12 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:08:23.470 16:47:12 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:08:23.470 16:47:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:23.470 16:47:12 -- common/autotest_common.sh@10 -- # set +x 00:08:23.470 16:47:12 -- json_config/json_config.sh@58 -- # return 0 00:08:23.470 16:47:12 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:08:23.470 16:47:12 -- json_config/json_config.sh@332 -- # create_bdev_subsystem_config 00:08:23.470 16:47:12 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:08:23.470 16:47:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:23.470 16:47:12 -- common/autotest_common.sh@10 -- # set +x 00:08:23.470 16:47:12 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:08:23.470 16:47:12 -- json_config/json_config.sh@160 -- # local expected_notifications 00:08:23.470 16:47:12 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:08:23.470 16:47:12 -- json_config/json_config.sh@164 -- # get_notifications 00:08:23.470 16:47:12 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:08:23.728 16:47:12 -- json_config/json_config.sh@64 -- # IFS=: 00:08:23.728 16:47:12 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:23.728 16:47:12 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:08:23.728 16:47:12 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:08:23.728 16:47:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:08:23.728 16:47:12 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:08:23.728 16:47:12 -- json_config/json_config.sh@64 -- # IFS=: 00:08:23.728 16:47:12 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:23.986 16:47:12 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:08:23.986 16:47:12 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:08:23.986 16:47:12 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:08:23.986 16:47:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_split_create Nvme0n1 2 00:08:23.986 Nvme0n1p0 Nvme0n1p1 00:08:23.986 16:47:12 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:08:23.986 16:47:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:08:24.245 [2024-11-05 16:47:13.086566] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:24.245 [2024-11-05 16:47:13.086655] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:24.245 00:08:24.245 16:47:13 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:08:24.245 16:47:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:08:24.502 Malloc3 00:08:24.502 16:47:13 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:08:24.502 16:47:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:08:24.761 [2024-11-05 16:47:13.505700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:24.761 [2024-11-05 16:47:13.505794] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.761 [2024-11-05 16:47:13.505830] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:24.761 [2024-11-05 16:47:13.505857] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.761 [2024-11-05 16:47:13.508199] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.761 [2024-11-05 16:47:13.508252] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:08:24.761 PTBdevFromMalloc3 00:08:24.761 16:47:13 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:08:24.761 16:47:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:08:25.019 Null0 00:08:25.019 16:47:13 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:08:25.019 16:47:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:08:25.019 Malloc0 00:08:25.281 16:47:13 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:08:25.281 16:47:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:08:25.281 Malloc1 00:08:25.281 16:47:14 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:08:25.281 16:47:14 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:08:25.541 102400+0 records in 00:08:25.541 102400+0 records out 00:08:25.541 104857600 bytes (105 MB, 100 MiB) copied, 0.280929 s, 373 MB/s 00:08:25.541 16:47:14 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio 
aio_disk 1024 00:08:25.541 16:47:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:08:25.800 aio_disk 00:08:25.800 16:47:14 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:08:25.800 16:47:14 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:08:25.800 16:47:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:08:26.058 edaaaf32-deec-4339-8d46-240942434140 00:08:26.058 16:47:14 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:08:26.058 16:47:14 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:08:26.058 16:47:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:08:26.316 16:47:15 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:08:26.316 16:47:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:08:26.316 16:47:15 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:08:26.316 16:47:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:08:26.588 16:47:15 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:08:26.588 16:47:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:08:26.861 16:47:15 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:08:26.861 16:47:15 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:08:26.861 16:47:15 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:776cd4f6-4ac2-471b-a9a1-cfb787d7bf69 bdev_register:26a8d736-c605-43a9-8724-b9bb202f1387 bdev_register:841080cb-05e8-4494-9e2f-7a704ae36409 bdev_register:b15d86e5-9ac2-4587-b335-be7debd322bb 00:08:26.861 16:47:15 -- json_config/json_config.sh@70 -- # local events_to_check 00:08:26.861 16:47:15 -- json_config/json_config.sh@71 -- # local recorded_events 00:08:26.861 16:47:15 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:08:26.861 16:47:15 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk 
bdev_register:776cd4f6-4ac2-471b-a9a1-cfb787d7bf69 bdev_register:26a8d736-c605-43a9-8724-b9bb202f1387 bdev_register:841080cb-05e8-4494-9e2f-7a704ae36409 bdev_register:b15d86e5-9ac2-4587-b335-be7debd322bb 00:08:26.861 16:47:15 -- json_config/json_config.sh@74 -- # sort 00:08:26.861 16:47:15 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:08:26.861 16:47:15 -- json_config/json_config.sh@75 -- # get_notifications 00:08:26.861 16:47:15 -- json_config/json_config.sh@75 -- # sort 00:08:26.862 16:47:15 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:08:26.862 16:47:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:26.862 16:47:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:26.862 16:47:15 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:08:26.862 16:47:15 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:08:26.862 16:47:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:08:27.121 16:47:15 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:27.121 16:47:15 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:27.121 16:47:15 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p0 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:27.121 16:47:15 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:27.121 16:47:15 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:27.121 16:47:15 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:27.121 16:47:15 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:27.121 16:47:15 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:27.121 16:47:15 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:27.121 16:47:15 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 
-- # read -r ev_type ev_ctx event_id 00:08:27.121 16:47:15 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:27.121 16:47:15 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:27.121 16:47:15 -- json_config/json_config.sh@65 -- # echo bdev_register:776cd4f6-4ac2-471b-a9a1-cfb787d7bf69 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:27.121 16:47:15 -- json_config/json_config.sh@65 -- # echo bdev_register:26a8d736-c605-43a9-8724-b9bb202f1387 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:27.121 16:47:15 -- json_config/json_config.sh@65 -- # echo bdev_register:841080cb-05e8-4494-9e2f-7a704ae36409 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:27.121 16:47:15 -- json_config/json_config.sh@65 -- # echo bdev_register:b15d86e5-9ac2-4587-b335-be7debd322bb 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:27.121 16:47:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:27.121 16:47:15 -- json_config/json_config.sh@77 -- # [[ bdev_register:26a8d736-c605-43a9-8724-b9bb202f1387 bdev_register:776cd4f6-4ac2-471b-a9a1-cfb787d7bf69 bdev_register:841080cb-05e8-4494-9e2f-7a704ae36409 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:b15d86e5-9ac2-4587-b335-be7debd322bb != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\6\a\8\d\7\3\6\-\c\6\0\5\-\4\3\a\9\-\8\7\2\4\-\b\9\b\b\2\0\2\f\1\3\8\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\7\6\c\d\4\f\6\-\4\a\c\2\-\4\7\1\b\-\a\9\a\1\-\c\f\b\7\8\7\d\7\b\f\6\9\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\4\1\0\8\0\c\b\-\0\5\e\8\-\4\4\9\4\-\9\e\2\f\-\7\a\7\0\4\a\e\3\6\4\0\9\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\b\1\5\d\8\6\e\5\-\9\a\c\2\-\4\5\8\7\-\b\3\3\5\-\b\e\7\d\e\b\d\3\2\2\b\b ]] 00:08:27.121 16:47:15 -- json_config/json_config.sh@89 -- # cat 00:08:27.121 16:47:15 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:26a8d736-c605-43a9-8724-b9bb202f1387 bdev_register:776cd4f6-4ac2-471b-a9a1-cfb787d7bf69 bdev_register:841080cb-05e8-4494-9e2f-7a704ae36409 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 
bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:b15d86e5-9ac2-4587-b335-be7debd322bb 00:08:27.121 Expected events matched: 00:08:27.121 bdev_register:26a8d736-c605-43a9-8724-b9bb202f1387 00:08:27.121 bdev_register:776cd4f6-4ac2-471b-a9a1-cfb787d7bf69 00:08:27.121 bdev_register:841080cb-05e8-4494-9e2f-7a704ae36409 00:08:27.121 bdev_register:Malloc0 00:08:27.121 bdev_register:Malloc0p0 00:08:27.121 bdev_register:Malloc0p1 00:08:27.121 bdev_register:Malloc0p2 00:08:27.121 bdev_register:Malloc1 00:08:27.121 bdev_register:Malloc3 00:08:27.121 bdev_register:Null0 00:08:27.121 bdev_register:Nvme0n1 00:08:27.121 bdev_register:Nvme0n1p0 00:08:27.121 bdev_register:Nvme0n1p1 00:08:27.121 bdev_register:PTBdevFromMalloc3 00:08:27.121 bdev_register:aio_disk 00:08:27.121 bdev_register:b15d86e5-9ac2-4587-b335-be7debd322bb 00:08:27.121 16:47:15 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:08:27.121 16:47:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:27.121 16:47:15 -- common/autotest_common.sh@10 -- # set +x 00:08:27.121 16:47:15 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:08:27.121 16:47:15 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:08:27.121 16:47:15 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:08:27.121 16:47:15 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:08:27.121 16:47:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:27.121 16:47:15 -- common/autotest_common.sh@10 -- # set +x 00:08:27.121 16:47:16 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:08:27.121 16:47:16 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:27.122 16:47:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:27.380 MallocBdevForConfigChangeCheck 00:08:27.380 16:47:16 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:08:27.380 16:47:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:27.380 16:47:16 -- common/autotest_common.sh@10 -- # set +x 00:08:27.639 16:47:16 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:08:27.639 16:47:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:27.897 INFO: shutting down applications... 00:08:27.897 16:47:16 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
00:08:27.897 16:47:16 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:08:27.897 16:47:16 -- json_config/json_config.sh@431 -- # json_config_clear target 00:08:27.897 16:47:16 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:08:27.897 16:47:16 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:27.897 [2024-11-05 16:47:16.774473] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:08:28.155 Calling clear_vhost_scsi_subsystem 00:08:28.155 Calling clear_iscsi_subsystem 00:08:28.155 Calling clear_vhost_blk_subsystem 00:08:28.155 Calling clear_nbd_subsystem 00:08:28.155 Calling clear_nvmf_subsystem 00:08:28.155 Calling clear_bdev_subsystem 00:08:28.155 Calling clear_accel_subsystem 00:08:28.156 Calling clear_iobuf_subsystem 00:08:28.156 Calling clear_sock_subsystem 00:08:28.156 Calling clear_vmd_subsystem 00:08:28.156 Calling clear_scheduler_subsystem 00:08:28.156 16:47:16 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:08:28.156 16:47:16 -- json_config/json_config.sh@396 -- # count=100 00:08:28.156 16:47:16 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:08:28.156 16:47:16 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:28.156 16:47:16 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:28.156 16:47:16 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:08:28.722 16:47:17 -- json_config/json_config.sh@398 -- # break 00:08:28.722 16:47:17 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:08:28.722 16:47:17 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:08:28.722 16:47:17 -- json_config/json_config.sh@120 -- # local app=target 00:08:28.722 16:47:17 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:08:28.722 16:47:17 -- json_config/json_config.sh@124 -- # [[ -n 103223 ]] 00:08:28.722 16:47:17 -- json_config/json_config.sh@127 -- # kill -SIGINT 103223 00:08:28.722 16:47:17 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:08:28.722 16:47:17 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:28.722 16:47:17 -- json_config/json_config.sh@130 -- # kill -0 103223 00:08:28.722 16:47:17 -- json_config/json_config.sh@134 -- # sleep 0.5 00:08:28.981 16:47:17 -- json_config/json_config.sh@129 -- # (( i++ )) 00:08:28.981 16:47:17 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:28.981 16:47:17 -- json_config/json_config.sh@130 -- # kill -0 103223 00:08:28.981 16:47:17 -- json_config/json_config.sh@134 -- # sleep 0.5 00:08:29.548 16:47:18 -- json_config/json_config.sh@129 -- # (( i++ )) 00:08:29.548 16:47:18 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:29.548 16:47:18 -- json_config/json_config.sh@130 -- # kill -0 103223 00:08:29.548 16:47:18 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:08:29.548 16:47:18 -- json_config/json_config.sh@132 -- # break 00:08:29.548 16:47:18 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:08:29.548 SPDK target shutdown done 00:08:29.548 16:47:18 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:08:29.548 16:47:18 -- json_config/json_config.sh@434 -- # echo 'INFO: 
relaunching applications...' 00:08:29.548 INFO: relaunching applications... 00:08:29.548 16:47:18 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:29.548 16:47:18 -- json_config/json_config.sh@98 -- # local app=target 00:08:29.548 16:47:18 -- json_config/json_config.sh@99 -- # shift 00:08:29.548 16:47:18 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:08:29.548 16:47:18 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:08:29.548 16:47:18 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:08:29.548 16:47:18 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:29.548 16:47:18 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:29.548 16:47:18 -- json_config/json_config.sh@111 -- # app_pid[$app]=103482 00:08:29.548 16:47:18 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:08:29.548 Waiting for target to run... 00:08:29.548 16:47:18 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:29.548 16:47:18 -- json_config/json_config.sh@114 -- # waitforlisten 103482 /var/tmp/spdk_tgt.sock 00:08:29.548 16:47:18 -- common/autotest_common.sh@829 -- # '[' -z 103482 ']' 00:08:29.548 16:47:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:29.548 16:47:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:29.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:29.548 16:47:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:29.548 16:47:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:29.548 16:47:18 -- common/autotest_common.sh@10 -- # set +x 00:08:29.548 [2024-11-05 16:47:18.383850] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:29.548 [2024-11-05 16:47:18.384036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103482 ] 00:08:30.116 [2024-11-05 16:47:18.817461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.116 [2024-11-05 16:47:18.967295] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:30.116 [2024-11-05 16:47:18.967519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.682 [2024-11-05 16:47:19.525417] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:30.682 [2024-11-05 16:47:19.525538] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:30.682 [2024-11-05 16:47:19.533388] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:30.682 [2024-11-05 16:47:19.533457] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:30.682 [2024-11-05 16:47:19.541405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:30.682 [2024-11-05 16:47:19.541486] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:30.682 [2024-11-05 16:47:19.541517] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:30.948 [2024-11-05 16:47:19.631679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:30.948 [2024-11-05 16:47:19.631764] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.948 [2024-11-05 16:47:19.631803] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:30.948 [2024-11-05 16:47:19.631831] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.948 [2024-11-05 16:47:19.632326] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.948 [2024-11-05 16:47:19.632395] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:08:31.214 16:47:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:31.214 16:47:19 -- common/autotest_common.sh@862 -- # return 0 00:08:31.214 00:08:31.214 16:47:19 -- json_config/json_config.sh@115 -- # echo '' 00:08:31.214 16:47:19 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:08:31.214 INFO: Checking if target configuration is the same... 00:08:31.214 16:47:19 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:31.214 16:47:19 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:31.214 16:47:19 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:08:31.214 16:47:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:31.214 + '[' 2 -ne 2 ']' 00:08:31.214 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:31.214 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:08:31.214 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:31.214 +++ basename /dev/fd/62 00:08:31.214 ++ mktemp /tmp/62.XXX 00:08:31.214 + tmp_file_1=/tmp/62.Lv7 00:08:31.214 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:31.214 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:31.214 + tmp_file_2=/tmp/spdk_tgt_config.json.toV 00:08:31.214 + ret=0 00:08:31.214 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:31.472 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:31.472 + diff -u /tmp/62.Lv7 /tmp/spdk_tgt_config.json.toV 00:08:31.472 INFO: JSON config files are the same 00:08:31.472 + echo 'INFO: JSON config files are the same' 00:08:31.472 + rm /tmp/62.Lv7 /tmp/spdk_tgt_config.json.toV 00:08:31.472 + exit 0 00:08:31.472 16:47:20 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:08:31.472 16:47:20 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:31.472 INFO: changing configuration and checking if this can be detected... 00:08:31.472 16:47:20 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:31.472 16:47:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:31.731 16:47:20 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:31.731 16:47:20 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:08:31.731 16:47:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:31.731 + '[' 2 -ne 2 ']' 00:08:31.731 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:31.731 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:08:31.731 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:31.731 +++ basename /dev/fd/62 00:08:31.990 ++ mktemp /tmp/62.XXX 00:08:31.990 + tmp_file_1=/tmp/62.fQQ 00:08:31.990 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:31.990 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:31.990 + tmp_file_2=/tmp/spdk_tgt_config.json.MS0 00:08:31.990 + ret=0 00:08:31.990 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:32.248 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:32.248 + diff -u /tmp/62.fQQ /tmp/spdk_tgt_config.json.MS0 00:08:32.248 + ret=1 00:08:32.248 + echo '=== Start of file: /tmp/62.fQQ ===' 00:08:32.248 + cat /tmp/62.fQQ 00:08:32.248 + echo '=== End of file: /tmp/62.fQQ ===' 00:08:32.248 + echo '' 00:08:32.248 + echo '=== Start of file: /tmp/spdk_tgt_config.json.MS0 ===' 00:08:32.248 + cat /tmp/spdk_tgt_config.json.MS0 00:08:32.248 + echo '=== End of file: /tmp/spdk_tgt_config.json.MS0 ===' 00:08:32.248 + echo '' 00:08:32.248 + rm /tmp/62.fQQ /tmp/spdk_tgt_config.json.MS0 00:08:32.248 + exit 1 00:08:32.248 INFO: configuration change detected. 00:08:32.248 16:47:20 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
00:08:32.248 16:47:20 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:08:32.248 16:47:20 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:08:32.248 16:47:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:32.248 16:47:20 -- common/autotest_common.sh@10 -- # set +x 00:08:32.248 16:47:20 -- json_config/json_config.sh@360 -- # local ret=0 00:08:32.248 16:47:20 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:08:32.248 16:47:20 -- json_config/json_config.sh@370 -- # [[ -n 103482 ]] 00:08:32.248 16:47:20 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:08:32.248 16:47:20 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:08:32.248 16:47:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:32.248 16:47:20 -- common/autotest_common.sh@10 -- # set +x 00:08:32.248 16:47:20 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:08:32.248 16:47:20 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:08:32.248 16:47:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:08:32.507 16:47:21 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:08:32.507 16:47:21 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:08:32.766 16:47:21 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:08:32.766 16:47:21 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:08:32.766 16:47:21 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:08:32.766 16:47:21 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:08:33.025 16:47:21 -- json_config/json_config.sh@246 -- # uname -s 00:08:33.025 16:47:21 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:08:33.025 16:47:21 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:08:33.025 16:47:21 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:08:33.025 16:47:21 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:08:33.025 16:47:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:33.025 16:47:21 -- common/autotest_common.sh@10 -- # set +x 00:08:33.025 16:47:21 -- json_config/json_config.sh@376 -- # killprocess 103482 00:08:33.025 16:47:21 -- common/autotest_common.sh@936 -- # '[' -z 103482 ']' 00:08:33.025 16:47:21 -- common/autotest_common.sh@940 -- # kill -0 103482 00:08:33.025 16:47:21 -- common/autotest_common.sh@941 -- # uname 00:08:33.025 16:47:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:33.025 16:47:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103482 00:08:33.025 16:47:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:33.025 killing process with pid 103482 00:08:33.025 16:47:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:33.025 16:47:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103482' 00:08:33.025 16:47:21 -- common/autotest_common.sh@955 -- # kill 103482 00:08:33.025 16:47:21 -- common/autotest_common.sh@960 -- # wait 103482 00:08:33.960 16:47:22 -- 
json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:33.960 16:47:22 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:08:33.960 16:47:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:33.960 16:47:22 -- common/autotest_common.sh@10 -- # set +x 00:08:33.960 16:47:22 -- json_config/json_config.sh@381 -- # return 0 00:08:33.960 INFO: Success 00:08:33.960 16:47:22 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:08:33.960 00:08:33.960 real 0m12.669s 00:08:33.960 user 0m17.954s 00:08:33.960 sys 0m2.210s 00:08:33.960 16:47:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:33.960 16:47:22 -- common/autotest_common.sh@10 -- # set +x 00:08:33.960 ************************************ 00:08:33.960 END TEST json_config 00:08:33.960 ************************************ 00:08:33.960 16:47:22 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:33.960 16:47:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:33.960 16:47:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.960 16:47:22 -- common/autotest_common.sh@10 -- # set +x 00:08:33.960 ************************************ 00:08:33.960 START TEST json_config_extra_key 00:08:33.960 ************************************ 00:08:33.960 16:47:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:33.960 16:47:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:33.960 16:47:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:33.960 16:47:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:34.220 16:47:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:34.220 16:47:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:34.220 16:47:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:34.220 16:47:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:34.220 16:47:22 -- scripts/common.sh@335 -- # IFS=.-: 00:08:34.220 16:47:22 -- scripts/common.sh@335 -- # read -ra ver1 00:08:34.220 16:47:22 -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.220 16:47:22 -- scripts/common.sh@336 -- # read -ra ver2 00:08:34.220 16:47:22 -- scripts/common.sh@337 -- # local 'op=<' 00:08:34.220 16:47:22 -- scripts/common.sh@339 -- # ver1_l=2 00:08:34.220 16:47:22 -- scripts/common.sh@340 -- # ver2_l=1 00:08:34.220 16:47:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:34.220 16:47:22 -- scripts/common.sh@343 -- # case "$op" in 00:08:34.220 16:47:22 -- scripts/common.sh@344 -- # : 1 00:08:34.220 16:47:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:34.220 16:47:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.220 16:47:22 -- scripts/common.sh@364 -- # decimal 1 00:08:34.220 16:47:22 -- scripts/common.sh@352 -- # local d=1 00:08:34.220 16:47:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.220 16:47:22 -- scripts/common.sh@354 -- # echo 1 00:08:34.220 16:47:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:34.220 16:47:22 -- scripts/common.sh@365 -- # decimal 2 00:08:34.220 16:47:22 -- scripts/common.sh@352 -- # local d=2 00:08:34.220 16:47:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.220 16:47:22 -- scripts/common.sh@354 -- # echo 2 00:08:34.220 16:47:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:34.220 16:47:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:34.220 16:47:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:34.220 16:47:22 -- scripts/common.sh@367 -- # return 0 00:08:34.220 16:47:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.220 16:47:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:34.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.221 --rc genhtml_branch_coverage=1 00:08:34.221 --rc genhtml_function_coverage=1 00:08:34.221 --rc genhtml_legend=1 00:08:34.221 --rc geninfo_all_blocks=1 00:08:34.221 --rc geninfo_unexecuted_blocks=1 00:08:34.221 00:08:34.221 ' 00:08:34.221 16:47:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:34.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.221 --rc genhtml_branch_coverage=1 00:08:34.221 --rc genhtml_function_coverage=1 00:08:34.221 --rc genhtml_legend=1 00:08:34.221 --rc geninfo_all_blocks=1 00:08:34.221 --rc geninfo_unexecuted_blocks=1 00:08:34.221 00:08:34.221 ' 00:08:34.221 16:47:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:34.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.221 --rc genhtml_branch_coverage=1 00:08:34.221 --rc genhtml_function_coverage=1 00:08:34.221 --rc genhtml_legend=1 00:08:34.221 --rc geninfo_all_blocks=1 00:08:34.221 --rc geninfo_unexecuted_blocks=1 00:08:34.221 00:08:34.221 ' 00:08:34.221 16:47:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:34.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.221 --rc genhtml_branch_coverage=1 00:08:34.221 --rc genhtml_function_coverage=1 00:08:34.221 --rc genhtml_legend=1 00:08:34.221 --rc geninfo_all_blocks=1 00:08:34.221 --rc geninfo_unexecuted_blocks=1 00:08:34.221 00:08:34.221 ' 00:08:34.221 16:47:22 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:34.221 16:47:22 -- nvmf/common.sh@7 -- # uname -s 00:08:34.221 16:47:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.221 16:47:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.221 16:47:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.221 16:47:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.221 16:47:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.221 16:47:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.221 16:47:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.221 16:47:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.221 16:47:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.221 16:47:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.221 16:47:22 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:afe5eaab-fa94-469f-95f9-809b09993e9a 00:08:34.221 16:47:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=afe5eaab-fa94-469f-95f9-809b09993e9a 00:08:34.221 16:47:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.221 16:47:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.221 16:47:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:34.221 16:47:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:34.221 16:47:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.221 16:47:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.221 16:47:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.221 16:47:22 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:34.221 16:47:22 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:34.221 16:47:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:34.221 16:47:22 -- paths/export.sh@5 -- # export PATH 00:08:34.221 16:47:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:34.221 16:47:22 -- nvmf/common.sh@46 -- # : 0 00:08:34.221 16:47:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:34.221 16:47:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:34.221 16:47:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:34.221 16:47:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.221 16:47:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.221 16:47:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:34.221 16:47:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:34.221 16:47:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:34.221 16:47:22 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:08:34.221 16:47:22 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:08:34.221 16:47:22 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:34.221 16:47:22 -- 
json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:08:34.221 16:47:22 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:34.221 16:47:22 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:08:34.221 16:47:22 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:34.221 16:47:22 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:08:34.221 16:47:22 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:34.221 INFO: launching applications... 00:08:34.221 16:47:22 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:08:34.221 16:47:22 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:34.221 16:47:22 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:08:34.221 16:47:22 -- json_config/json_config_extra_key.sh@25 -- # shift 00:08:34.221 16:47:22 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:08:34.221 16:47:22 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:08:34.221 16:47:22 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=103667 00:08:34.221 16:47:22 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:08:34.221 Waiting for target to run... 00:08:34.221 16:47:22 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:34.221 16:47:22 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 103667 /var/tmp/spdk_tgt.sock 00:08:34.221 16:47:22 -- common/autotest_common.sh@829 -- # '[' -z 103667 ']' 00:08:34.221 16:47:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:34.221 16:47:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:34.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:34.222 16:47:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:34.222 16:47:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:34.222 16:47:22 -- common/autotest_common.sh@10 -- # set +x 00:08:34.222 [2024-11-05 16:47:23.007584] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:34.222 [2024-11-05 16:47:23.007795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103667 ] 00:08:34.789 [2024-11-05 16:47:23.445224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.789 [2024-11-05 16:47:23.593460] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:34.789 [2024-11-05 16:47:23.593719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.167 16:47:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:36.167 16:47:24 -- common/autotest_common.sh@862 -- # return 0 00:08:36.167 00:08:36.167 16:47:24 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:08:36.167 INFO: shutting down applications... 00:08:36.167 16:47:24 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:08:36.167 16:47:24 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:08:36.167 16:47:24 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:08:36.167 16:47:24 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:08:36.167 16:47:24 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 103667 ]] 00:08:36.167 16:47:24 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 103667 00:08:36.167 16:47:24 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:08:36.167 16:47:24 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:36.167 16:47:24 -- json_config/json_config_extra_key.sh@50 -- # kill -0 103667 00:08:36.167 16:47:24 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:36.478 16:47:25 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:36.478 16:47:25 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:36.478 16:47:25 -- json_config/json_config_extra_key.sh@50 -- # kill -0 103667 00:08:36.478 16:47:25 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:37.045 16:47:25 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:37.045 16:47:25 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:37.045 16:47:25 -- json_config/json_config_extra_key.sh@50 -- # kill -0 103667 00:08:37.045 16:47:25 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:37.304 16:47:26 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:37.304 16:47:26 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:37.304 16:47:26 -- json_config/json_config_extra_key.sh@50 -- # kill -0 103667 00:08:37.304 16:47:26 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:37.871 16:47:26 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:37.871 16:47:26 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:37.871 16:47:26 -- json_config/json_config_extra_key.sh@50 -- # kill -0 103667 00:08:37.871 16:47:26 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:38.438 16:47:27 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:38.438 16:47:27 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:38.438 16:47:27 -- json_config/json_config_extra_key.sh@50 -- # kill -0 103667 00:08:38.438 16:47:27 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:08:38.438 16:47:27 -- json_config/json_config_extra_key.sh@52 -- # break 
00:08:38.438 16:47:27 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:08:38.438 SPDK target shutdown done 00:08:38.438 16:47:27 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:08:38.438 Success 00:08:38.438 16:47:27 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:08:38.438 00:08:38.438 real 0m4.422s 00:08:38.438 user 0m4.022s 00:08:38.438 sys 0m0.587s 00:08:38.438 16:47:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:38.438 16:47:27 -- common/autotest_common.sh@10 -- # set +x 00:08:38.438 ************************************ 00:08:38.438 END TEST json_config_extra_key 00:08:38.438 ************************************ 00:08:38.438 16:47:27 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:38.438 16:47:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:38.438 16:47:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.438 16:47:27 -- common/autotest_common.sh@10 -- # set +x 00:08:38.438 ************************************ 00:08:38.438 START TEST alias_rpc 00:08:38.438 ************************************ 00:08:38.438 16:47:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:38.438 * Looking for test storage... 00:08:38.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:38.438 16:47:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:38.438 16:47:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:38.438 16:47:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:38.697 16:47:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:38.697 16:47:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:38.697 16:47:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:38.697 16:47:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:38.697 16:47:27 -- scripts/common.sh@335 -- # IFS=.-: 00:08:38.697 16:47:27 -- scripts/common.sh@335 -- # read -ra ver1 00:08:38.697 16:47:27 -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.697 16:47:27 -- scripts/common.sh@336 -- # read -ra ver2 00:08:38.697 16:47:27 -- scripts/common.sh@337 -- # local 'op=<' 00:08:38.697 16:47:27 -- scripts/common.sh@339 -- # ver1_l=2 00:08:38.697 16:47:27 -- scripts/common.sh@340 -- # ver2_l=1 00:08:38.697 16:47:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:38.697 16:47:27 -- scripts/common.sh@343 -- # case "$op" in 00:08:38.697 16:47:27 -- scripts/common.sh@344 -- # : 1 00:08:38.697 16:47:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:38.697 16:47:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.697 16:47:27 -- scripts/common.sh@364 -- # decimal 1 00:08:38.697 16:47:27 -- scripts/common.sh@352 -- # local d=1 00:08:38.697 16:47:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.697 16:47:27 -- scripts/common.sh@354 -- # echo 1 00:08:38.697 16:47:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:38.697 16:47:27 -- scripts/common.sh@365 -- # decimal 2 00:08:38.697 16:47:27 -- scripts/common.sh@352 -- # local d=2 00:08:38.697 16:47:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.697 16:47:27 -- scripts/common.sh@354 -- # echo 2 00:08:38.697 16:47:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:38.697 16:47:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:38.697 16:47:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:38.697 16:47:27 -- scripts/common.sh@367 -- # return 0 00:08:38.697 16:47:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.697 16:47:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:38.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.697 --rc genhtml_branch_coverage=1 00:08:38.697 --rc genhtml_function_coverage=1 00:08:38.697 --rc genhtml_legend=1 00:08:38.697 --rc geninfo_all_blocks=1 00:08:38.697 --rc geninfo_unexecuted_blocks=1 00:08:38.697 00:08:38.697 ' 00:08:38.697 16:47:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:38.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.697 --rc genhtml_branch_coverage=1 00:08:38.697 --rc genhtml_function_coverage=1 00:08:38.697 --rc genhtml_legend=1 00:08:38.697 --rc geninfo_all_blocks=1 00:08:38.697 --rc geninfo_unexecuted_blocks=1 00:08:38.697 00:08:38.697 ' 00:08:38.697 16:47:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:38.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.697 --rc genhtml_branch_coverage=1 00:08:38.697 --rc genhtml_function_coverage=1 00:08:38.697 --rc genhtml_legend=1 00:08:38.697 --rc geninfo_all_blocks=1 00:08:38.697 --rc geninfo_unexecuted_blocks=1 00:08:38.697 00:08:38.697 ' 00:08:38.697 16:47:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:38.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.697 --rc genhtml_branch_coverage=1 00:08:38.697 --rc genhtml_function_coverage=1 00:08:38.697 --rc genhtml_legend=1 00:08:38.697 --rc geninfo_all_blocks=1 00:08:38.697 --rc geninfo_unexecuted_blocks=1 00:08:38.697 00:08:38.697 ' 00:08:38.697 16:47:27 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:38.697 16:47:27 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=103793 00:08:38.697 16:47:27 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 103793 00:08:38.697 16:47:27 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:38.697 16:47:27 -- common/autotest_common.sh@829 -- # '[' -z 103793 ']' 00:08:38.697 16:47:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.697 16:47:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.697 16:47:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:38.697 16:47:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.697 16:47:27 -- common/autotest_common.sh@10 -- # set +x 00:08:38.697 [2024-11-05 16:47:27.481796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:38.697 [2024-11-05 16:47:27.482011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103793 ] 00:08:38.956 [2024-11-05 16:47:27.646487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.956 [2024-11-05 16:47:27.806084] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:38.956 [2024-11-05 16:47:27.806305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.333 16:47:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.333 16:47:29 -- common/autotest_common.sh@862 -- # return 0 00:08:40.333 16:47:29 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:40.591 16:47:29 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 103793 00:08:40.591 16:47:29 -- common/autotest_common.sh@936 -- # '[' -z 103793 ']' 00:08:40.591 16:47:29 -- common/autotest_common.sh@940 -- # kill -0 103793 00:08:40.591 16:47:29 -- common/autotest_common.sh@941 -- # uname 00:08:40.591 16:47:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:40.591 16:47:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103793 00:08:40.591 16:47:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:40.591 16:47:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:40.591 killing process with pid 103793 00:08:40.591 16:47:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103793' 00:08:40.591 16:47:29 -- common/autotest_common.sh@955 -- # kill 103793 00:08:40.591 16:47:29 -- common/autotest_common.sh@960 -- # wait 103793 00:08:42.496 ************************************ 00:08:42.496 END TEST alias_rpc 00:08:42.496 ************************************ 00:08:42.496 00:08:42.496 real 0m3.911s 00:08:42.496 user 0m4.147s 00:08:42.496 sys 0m0.591s 00:08:42.496 16:47:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:42.496 16:47:31 -- common/autotest_common.sh@10 -- # set +x 00:08:42.496 16:47:31 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:08:42.496 16:47:31 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:42.496 16:47:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:42.496 16:47:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.496 16:47:31 -- common/autotest_common.sh@10 -- # set +x 00:08:42.496 ************************************ 00:08:42.496 START TEST spdkcli_tcp 00:08:42.496 ************************************ 00:08:42.496 16:47:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:42.496 * Looking for test storage... 
00:08:42.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:42.496 16:47:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:42.496 16:47:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:42.496 16:47:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:42.496 16:47:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:42.496 16:47:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:42.496 16:47:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:42.496 16:47:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:42.496 16:47:31 -- scripts/common.sh@335 -- # IFS=.-: 00:08:42.496 16:47:31 -- scripts/common.sh@335 -- # read -ra ver1 00:08:42.496 16:47:31 -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.496 16:47:31 -- scripts/common.sh@336 -- # read -ra ver2 00:08:42.496 16:47:31 -- scripts/common.sh@337 -- # local 'op=<' 00:08:42.496 16:47:31 -- scripts/common.sh@339 -- # ver1_l=2 00:08:42.496 16:47:31 -- scripts/common.sh@340 -- # ver2_l=1 00:08:42.496 16:47:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:42.496 16:47:31 -- scripts/common.sh@343 -- # case "$op" in 00:08:42.496 16:47:31 -- scripts/common.sh@344 -- # : 1 00:08:42.496 16:47:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:42.496 16:47:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:42.496 16:47:31 -- scripts/common.sh@364 -- # decimal 1 00:08:42.496 16:47:31 -- scripts/common.sh@352 -- # local d=1 00:08:42.496 16:47:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.496 16:47:31 -- scripts/common.sh@354 -- # echo 1 00:08:42.496 16:47:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:42.496 16:47:31 -- scripts/common.sh@365 -- # decimal 2 00:08:42.496 16:47:31 -- scripts/common.sh@352 -- # local d=2 00:08:42.496 16:47:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.496 16:47:31 -- scripts/common.sh@354 -- # echo 2 00:08:42.496 16:47:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:42.496 16:47:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:42.496 16:47:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:42.496 16:47:31 -- scripts/common.sh@367 -- # return 0 00:08:42.496 16:47:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.496 16:47:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:42.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.496 --rc genhtml_branch_coverage=1 00:08:42.496 --rc genhtml_function_coverage=1 00:08:42.496 --rc genhtml_legend=1 00:08:42.496 --rc geninfo_all_blocks=1 00:08:42.496 --rc geninfo_unexecuted_blocks=1 00:08:42.496 00:08:42.496 ' 00:08:42.496 16:47:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:42.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.496 --rc genhtml_branch_coverage=1 00:08:42.496 --rc genhtml_function_coverage=1 00:08:42.496 --rc genhtml_legend=1 00:08:42.496 --rc geninfo_all_blocks=1 00:08:42.496 --rc geninfo_unexecuted_blocks=1 00:08:42.496 00:08:42.496 ' 00:08:42.496 16:47:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:42.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.496 --rc genhtml_branch_coverage=1 00:08:42.496 --rc genhtml_function_coverage=1 00:08:42.496 --rc genhtml_legend=1 00:08:42.496 --rc geninfo_all_blocks=1 00:08:42.496 --rc geninfo_unexecuted_blocks=1 00:08:42.496 00:08:42.496 ' 00:08:42.496 16:47:31 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:42.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.496 --rc genhtml_branch_coverage=1 00:08:42.496 --rc genhtml_function_coverage=1 00:08:42.496 --rc genhtml_legend=1 00:08:42.496 --rc geninfo_all_blocks=1 00:08:42.496 --rc geninfo_unexecuted_blocks=1 00:08:42.496 00:08:42.496 ' 00:08:42.496 16:47:31 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:42.496 16:47:31 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:42.496 16:47:31 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:42.496 16:47:31 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:42.496 16:47:31 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:42.496 16:47:31 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:42.496 16:47:31 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:42.496 16:47:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:42.496 16:47:31 -- common/autotest_common.sh@10 -- # set +x 00:08:42.496 16:47:31 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=103905 00:08:42.496 16:47:31 -- spdkcli/tcp.sh@27 -- # waitforlisten 103905 00:08:42.496 16:47:31 -- common/autotest_common.sh@829 -- # '[' -z 103905 ']' 00:08:42.496 16:47:31 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:42.496 16:47:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.496 16:47:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:42.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.496 16:47:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.496 16:47:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:42.496 16:47:31 -- common/autotest_common.sh@10 -- # set +x 00:08:42.755 [2024-11-05 16:47:31.450562] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:42.755 [2024-11-05 16:47:31.450770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103905 ] 00:08:42.755 [2024-11-05 16:47:31.623520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:43.014 [2024-11-05 16:47:31.792361] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:43.014 [2024-11-05 16:47:31.792884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.014 [2024-11-05 16:47:31.792882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.390 16:47:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:44.390 16:47:33 -- common/autotest_common.sh@862 -- # return 0 00:08:44.390 16:47:33 -- spdkcli/tcp.sh@31 -- # socat_pid=103941 00:08:44.390 16:47:33 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:44.390 16:47:33 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:44.650 [ 00:08:44.650 "spdk_get_version", 00:08:44.650 "rpc_get_methods", 00:08:44.650 "trace_get_info", 00:08:44.650 "trace_get_tpoint_group_mask", 00:08:44.650 "trace_disable_tpoint_group", 00:08:44.650 "trace_enable_tpoint_group", 00:08:44.650 "trace_clear_tpoint_mask", 00:08:44.650 "trace_set_tpoint_mask", 00:08:44.650 "framework_get_pci_devices", 00:08:44.650 "framework_get_config", 00:08:44.650 "framework_get_subsystems", 00:08:44.650 "iobuf_get_stats", 00:08:44.650 "iobuf_set_options", 00:08:44.650 "sock_set_default_impl", 00:08:44.650 "sock_impl_set_options", 00:08:44.650 "sock_impl_get_options", 00:08:44.650 "vmd_rescan", 00:08:44.650 "vmd_remove_device", 00:08:44.650 "vmd_enable", 00:08:44.650 "accel_get_stats", 00:08:44.650 "accel_set_options", 00:08:44.650 "accel_set_driver", 00:08:44.650 "accel_crypto_key_destroy", 00:08:44.650 "accel_crypto_keys_get", 00:08:44.650 "accel_crypto_key_create", 00:08:44.650 "accel_assign_opc", 00:08:44.650 "accel_get_module_info", 00:08:44.650 "accel_get_opc_assignments", 00:08:44.650 "notify_get_notifications", 00:08:44.650 "notify_get_types", 00:08:44.650 "bdev_get_histogram", 00:08:44.650 "bdev_enable_histogram", 00:08:44.650 "bdev_set_qos_limit", 00:08:44.650 "bdev_set_qd_sampling_period", 00:08:44.650 "bdev_get_bdevs", 00:08:44.650 "bdev_reset_iostat", 00:08:44.650 "bdev_get_iostat", 00:08:44.650 "bdev_examine", 00:08:44.650 "bdev_wait_for_examine", 00:08:44.650 "bdev_set_options", 00:08:44.650 "scsi_get_devices", 00:08:44.650 "thread_set_cpumask", 00:08:44.650 "framework_get_scheduler", 00:08:44.650 "framework_set_scheduler", 00:08:44.650 "framework_get_reactors", 00:08:44.650 "thread_get_io_channels", 00:08:44.650 "thread_get_pollers", 00:08:44.650 "thread_get_stats", 00:08:44.650 "framework_monitor_context_switch", 00:08:44.650 "spdk_kill_instance", 00:08:44.650 "log_enable_timestamps", 00:08:44.650 "log_get_flags", 00:08:44.650 "log_clear_flag", 00:08:44.650 "log_set_flag", 00:08:44.650 "log_get_level", 00:08:44.650 "log_set_level", 00:08:44.650 "log_get_print_level", 00:08:44.650 "log_set_print_level", 00:08:44.650 "framework_enable_cpumask_locks", 00:08:44.650 "framework_disable_cpumask_locks", 00:08:44.650 "framework_wait_init", 00:08:44.650 "framework_start_init", 00:08:44.650 "virtio_blk_create_transport", 00:08:44.650 "virtio_blk_get_transports", 
00:08:44.650 "vhost_controller_set_coalescing", 00:08:44.650 "vhost_get_controllers", 00:08:44.650 "vhost_delete_controller", 00:08:44.650 "vhost_create_blk_controller", 00:08:44.650 "vhost_scsi_controller_remove_target", 00:08:44.650 "vhost_scsi_controller_add_target", 00:08:44.650 "vhost_start_scsi_controller", 00:08:44.650 "vhost_create_scsi_controller", 00:08:44.650 "nbd_get_disks", 00:08:44.650 "nbd_stop_disk", 00:08:44.650 "nbd_start_disk", 00:08:44.650 "env_dpdk_get_mem_stats", 00:08:44.650 "nvmf_subsystem_get_listeners", 00:08:44.650 "nvmf_subsystem_get_qpairs", 00:08:44.650 "nvmf_subsystem_get_controllers", 00:08:44.650 "nvmf_get_stats", 00:08:44.650 "nvmf_get_transports", 00:08:44.650 "nvmf_create_transport", 00:08:44.650 "nvmf_get_targets", 00:08:44.650 "nvmf_delete_target", 00:08:44.650 "nvmf_create_target", 00:08:44.650 "nvmf_subsystem_allow_any_host", 00:08:44.650 "nvmf_subsystem_remove_host", 00:08:44.650 "nvmf_subsystem_add_host", 00:08:44.650 "nvmf_subsystem_remove_ns", 00:08:44.650 "nvmf_subsystem_add_ns", 00:08:44.650 "nvmf_subsystem_listener_set_ana_state", 00:08:44.650 "nvmf_discovery_get_referrals", 00:08:44.650 "nvmf_discovery_remove_referral", 00:08:44.650 "nvmf_discovery_add_referral", 00:08:44.650 "nvmf_subsystem_remove_listener", 00:08:44.650 "nvmf_subsystem_add_listener", 00:08:44.650 "nvmf_delete_subsystem", 00:08:44.650 "nvmf_create_subsystem", 00:08:44.650 "nvmf_get_subsystems", 00:08:44.650 "nvmf_set_crdt", 00:08:44.650 "nvmf_set_config", 00:08:44.650 "nvmf_set_max_subsystems", 00:08:44.650 "iscsi_set_options", 00:08:44.650 "iscsi_get_auth_groups", 00:08:44.650 "iscsi_auth_group_remove_secret", 00:08:44.650 "iscsi_auth_group_add_secret", 00:08:44.650 "iscsi_delete_auth_group", 00:08:44.650 "iscsi_create_auth_group", 00:08:44.650 "iscsi_set_discovery_auth", 00:08:44.650 "iscsi_get_options", 00:08:44.650 "iscsi_target_node_request_logout", 00:08:44.650 "iscsi_target_node_set_redirect", 00:08:44.650 "iscsi_target_node_set_auth", 00:08:44.650 "iscsi_target_node_add_lun", 00:08:44.650 "iscsi_get_connections", 00:08:44.650 "iscsi_portal_group_set_auth", 00:08:44.650 "iscsi_start_portal_group", 00:08:44.650 "iscsi_delete_portal_group", 00:08:44.650 "iscsi_create_portal_group", 00:08:44.650 "iscsi_get_portal_groups", 00:08:44.650 "iscsi_delete_target_node", 00:08:44.650 "iscsi_target_node_remove_pg_ig_maps", 00:08:44.650 "iscsi_target_node_add_pg_ig_maps", 00:08:44.650 "iscsi_create_target_node", 00:08:44.650 "iscsi_get_target_nodes", 00:08:44.650 "iscsi_delete_initiator_group", 00:08:44.650 "iscsi_initiator_group_remove_initiators", 00:08:44.650 "iscsi_initiator_group_add_initiators", 00:08:44.650 "iscsi_create_initiator_group", 00:08:44.650 "iscsi_get_initiator_groups", 00:08:44.650 "iaa_scan_accel_module", 00:08:44.650 "dsa_scan_accel_module", 00:08:44.650 "ioat_scan_accel_module", 00:08:44.650 "accel_error_inject_error", 00:08:44.650 "bdev_iscsi_delete", 00:08:44.650 "bdev_iscsi_create", 00:08:44.650 "bdev_iscsi_set_options", 00:08:44.650 "bdev_virtio_attach_controller", 00:08:44.650 "bdev_virtio_scsi_get_devices", 00:08:44.650 "bdev_virtio_detach_controller", 00:08:44.650 "bdev_virtio_blk_set_hotplug", 00:08:44.650 "bdev_ftl_set_property", 00:08:44.650 "bdev_ftl_get_properties", 00:08:44.650 "bdev_ftl_get_stats", 00:08:44.650 "bdev_ftl_unmap", 00:08:44.650 "bdev_ftl_unload", 00:08:44.650 "bdev_ftl_delete", 00:08:44.650 "bdev_ftl_load", 00:08:44.650 "bdev_ftl_create", 00:08:44.650 "bdev_aio_delete", 00:08:44.650 "bdev_aio_rescan", 00:08:44.650 "bdev_aio_create", 
00:08:44.650 "blobfs_create", 00:08:44.650 "blobfs_detect", 00:08:44.650 "blobfs_set_cache_size", 00:08:44.650 "bdev_zone_block_delete", 00:08:44.650 "bdev_zone_block_create", 00:08:44.650 "bdev_delay_delete", 00:08:44.650 "bdev_delay_create", 00:08:44.650 "bdev_delay_update_latency", 00:08:44.650 "bdev_split_delete", 00:08:44.650 "bdev_split_create", 00:08:44.650 "bdev_error_inject_error", 00:08:44.650 "bdev_error_delete", 00:08:44.650 "bdev_error_create", 00:08:44.650 "bdev_raid_set_options", 00:08:44.650 "bdev_raid_remove_base_bdev", 00:08:44.650 "bdev_raid_add_base_bdev", 00:08:44.650 "bdev_raid_delete", 00:08:44.650 "bdev_raid_create", 00:08:44.650 "bdev_raid_get_bdevs", 00:08:44.650 "bdev_lvol_grow_lvstore", 00:08:44.650 "bdev_lvol_get_lvols", 00:08:44.650 "bdev_lvol_get_lvstores", 00:08:44.650 "bdev_lvol_delete", 00:08:44.650 "bdev_lvol_set_read_only", 00:08:44.650 "bdev_lvol_resize", 00:08:44.650 "bdev_lvol_decouple_parent", 00:08:44.650 "bdev_lvol_inflate", 00:08:44.650 "bdev_lvol_rename", 00:08:44.650 "bdev_lvol_clone_bdev", 00:08:44.650 "bdev_lvol_clone", 00:08:44.650 "bdev_lvol_snapshot", 00:08:44.650 "bdev_lvol_create", 00:08:44.650 "bdev_lvol_delete_lvstore", 00:08:44.650 "bdev_lvol_rename_lvstore", 00:08:44.650 "bdev_lvol_create_lvstore", 00:08:44.650 "bdev_passthru_delete", 00:08:44.650 "bdev_passthru_create", 00:08:44.650 "bdev_nvme_cuse_unregister", 00:08:44.650 "bdev_nvme_cuse_register", 00:08:44.650 "bdev_opal_new_user", 00:08:44.650 "bdev_opal_set_lock_state", 00:08:44.650 "bdev_opal_delete", 00:08:44.650 "bdev_opal_get_info", 00:08:44.650 "bdev_opal_create", 00:08:44.650 "bdev_nvme_opal_revert", 00:08:44.650 "bdev_nvme_opal_init", 00:08:44.650 "bdev_nvme_send_cmd", 00:08:44.650 "bdev_nvme_get_path_iostat", 00:08:44.650 "bdev_nvme_get_mdns_discovery_info", 00:08:44.650 "bdev_nvme_stop_mdns_discovery", 00:08:44.650 "bdev_nvme_start_mdns_discovery", 00:08:44.650 "bdev_nvme_set_multipath_policy", 00:08:44.650 "bdev_nvme_set_preferred_path", 00:08:44.650 "bdev_nvme_get_io_paths", 00:08:44.650 "bdev_nvme_remove_error_injection", 00:08:44.650 "bdev_nvme_add_error_injection", 00:08:44.650 "bdev_nvme_get_discovery_info", 00:08:44.650 "bdev_nvme_stop_discovery", 00:08:44.650 "bdev_nvme_start_discovery", 00:08:44.651 "bdev_nvme_get_controller_health_info", 00:08:44.651 "bdev_nvme_disable_controller", 00:08:44.651 "bdev_nvme_enable_controller", 00:08:44.651 "bdev_nvme_reset_controller", 00:08:44.651 "bdev_nvme_get_transport_statistics", 00:08:44.651 "bdev_nvme_apply_firmware", 00:08:44.651 "bdev_nvme_detach_controller", 00:08:44.651 "bdev_nvme_get_controllers", 00:08:44.651 "bdev_nvme_attach_controller", 00:08:44.651 "bdev_nvme_set_hotplug", 00:08:44.651 "bdev_nvme_set_options", 00:08:44.651 "bdev_null_resize", 00:08:44.651 "bdev_null_delete", 00:08:44.651 "bdev_null_create", 00:08:44.651 "bdev_malloc_delete", 00:08:44.651 "bdev_malloc_create" 00:08:44.651 ] 00:08:44.651 16:47:33 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:44.651 16:47:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:44.651 16:47:33 -- common/autotest_common.sh@10 -- # set +x 00:08:44.651 16:47:33 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:44.651 16:47:33 -- spdkcli/tcp.sh@38 -- # killprocess 103905 00:08:44.651 16:47:33 -- common/autotest_common.sh@936 -- # '[' -z 103905 ']' 00:08:44.651 16:47:33 -- common/autotest_common.sh@940 -- # kill -0 103905 00:08:44.651 16:47:33 -- common/autotest_common.sh@941 -- # uname 00:08:44.651 16:47:33 -- common/autotest_common.sh@941 
-- # '[' Linux = Linux ']' 00:08:44.651 16:47:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103905 00:08:44.651 16:47:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:44.651 16:47:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:44.651 16:47:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103905' 00:08:44.651 killing process with pid 103905 00:08:44.651 16:47:33 -- common/autotest_common.sh@955 -- # kill 103905 00:08:44.651 16:47:33 -- common/autotest_common.sh@960 -- # wait 103905 00:08:46.554 ************************************ 00:08:46.554 END TEST spdkcli_tcp 00:08:46.554 ************************************ 00:08:46.554 00:08:46.554 real 0m4.039s 00:08:46.554 user 0m7.418s 00:08:46.554 sys 0m0.614s 00:08:46.554 16:47:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:46.554 16:47:35 -- common/autotest_common.sh@10 -- # set +x 00:08:46.554 16:47:35 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:46.554 16:47:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:46.554 16:47:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.554 16:47:35 -- common/autotest_common.sh@10 -- # set +x 00:08:46.554 ************************************ 00:08:46.554 START TEST dpdk_mem_utility 00:08:46.554 ************************************ 00:08:46.554 16:47:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:46.554 * Looking for test storage... 00:08:46.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:46.554 16:47:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:46.554 16:47:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:46.554 16:47:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:46.812 16:47:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:46.812 16:47:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:46.812 16:47:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:46.812 16:47:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:46.812 16:47:35 -- scripts/common.sh@335 -- # IFS=.-: 00:08:46.812 16:47:35 -- scripts/common.sh@335 -- # read -ra ver1 00:08:46.812 16:47:35 -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.812 16:47:35 -- scripts/common.sh@336 -- # read -ra ver2 00:08:46.812 16:47:35 -- scripts/common.sh@337 -- # local 'op=<' 00:08:46.812 16:47:35 -- scripts/common.sh@339 -- # ver1_l=2 00:08:46.812 16:47:35 -- scripts/common.sh@340 -- # ver2_l=1 00:08:46.812 16:47:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:46.812 16:47:35 -- scripts/common.sh@343 -- # case "$op" in 00:08:46.812 16:47:35 -- scripts/common.sh@344 -- # : 1 00:08:46.812 16:47:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:46.812 16:47:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:46.812 16:47:35 -- scripts/common.sh@364 -- # decimal 1 00:08:46.812 16:47:35 -- scripts/common.sh@352 -- # local d=1 00:08:46.812 16:47:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.812 16:47:35 -- scripts/common.sh@354 -- # echo 1 00:08:46.812 16:47:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:46.812 16:47:35 -- scripts/common.sh@365 -- # decimal 2 00:08:46.812 16:47:35 -- scripts/common.sh@352 -- # local d=2 00:08:46.812 16:47:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.812 16:47:35 -- scripts/common.sh@354 -- # echo 2 00:08:46.812 16:47:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:46.812 16:47:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:46.812 16:47:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:46.812 16:47:35 -- scripts/common.sh@367 -- # return 0 00:08:46.812 16:47:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.812 16:47:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:46.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.812 --rc genhtml_branch_coverage=1 00:08:46.812 --rc genhtml_function_coverage=1 00:08:46.812 --rc genhtml_legend=1 00:08:46.812 --rc geninfo_all_blocks=1 00:08:46.812 --rc geninfo_unexecuted_blocks=1 00:08:46.812 00:08:46.812 ' 00:08:46.812 16:47:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:46.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.812 --rc genhtml_branch_coverage=1 00:08:46.812 --rc genhtml_function_coverage=1 00:08:46.812 --rc genhtml_legend=1 00:08:46.812 --rc geninfo_all_blocks=1 00:08:46.812 --rc geninfo_unexecuted_blocks=1 00:08:46.812 00:08:46.812 ' 00:08:46.812 16:47:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:46.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.812 --rc genhtml_branch_coverage=1 00:08:46.812 --rc genhtml_function_coverage=1 00:08:46.812 --rc genhtml_legend=1 00:08:46.812 --rc geninfo_all_blocks=1 00:08:46.812 --rc geninfo_unexecuted_blocks=1 00:08:46.812 00:08:46.812 ' 00:08:46.812 16:47:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:46.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.813 --rc genhtml_branch_coverage=1 00:08:46.813 --rc genhtml_function_coverage=1 00:08:46.813 --rc genhtml_legend=1 00:08:46.813 --rc geninfo_all_blocks=1 00:08:46.813 --rc geninfo_unexecuted_blocks=1 00:08:46.813 00:08:46.813 ' 00:08:46.813 16:47:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:46.813 16:47:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=104041 00:08:46.813 16:47:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 104041 00:08:46.813 16:47:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:46.813 16:47:35 -- common/autotest_common.sh@829 -- # '[' -z 104041 ']' 00:08:46.813 16:47:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.813 16:47:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:46.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.813 16:47:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
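A sketch of the dump-and-parse flow the mem-utility test below exercises; the dump filename comes from the RPC reply shown further down, the commands themselves from the traced test script.

# Ask the running target to write its DPDK memory stats to a dump file.
./scripts/rpc.py env_dpdk_get_mem_stats      # replies {"filename": "/tmp/spdk_mem_dump.txt"}
# Summarize heaps, mempools, and memzones from that dump,
./scripts/dpdk_mem_info.py
# or list every element of a single heap (heap id 0 here).
./scripts/dpdk_mem_info.py -m 0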
00:08:46.813 16:47:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:46.813 16:47:35 -- common/autotest_common.sh@10 -- # set +x 00:08:46.813 [2024-11-05 16:47:35.539769] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:46.813 [2024-11-05 16:47:35.539965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104041 ] 00:08:47.101 [2024-11-05 16:47:35.706819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.101 [2024-11-05 16:47:35.884684] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:47.101 [2024-11-05 16:47:35.884933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.484 16:47:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:48.484 16:47:37 -- common/autotest_common.sh@862 -- # return 0 00:08:48.484 16:47:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:48.484 16:47:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:48.484 16:47:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.484 16:47:37 -- common/autotest_common.sh@10 -- # set +x 00:08:48.484 { 00:08:48.484 "filename": "/tmp/spdk_mem_dump.txt" 00:08:48.484 } 00:08:48.484 16:47:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.484 16:47:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:48.484 DPDK memory size 820.000000 MiB in 1 heap(s) 00:08:48.484 1 heaps totaling size 820.000000 MiB 00:08:48.484 size: 820.000000 MiB heap id: 0 00:08:48.484 end heaps---------- 00:08:48.484 8 mempools totaling size 598.116089 MiB 00:08:48.484 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:48.484 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:48.484 size: 84.521057 MiB name: bdev_io_104041 00:08:48.484 size: 51.011292 MiB name: evtpool_104041 00:08:48.484 size: 50.003479 MiB name: msgpool_104041 00:08:48.484 size: 21.763794 MiB name: PDU_Pool 00:08:48.484 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:48.484 size: 0.026123 MiB name: Session_Pool 00:08:48.484 end mempools------- 00:08:48.484 6 memzones totaling size 4.142822 MiB 00:08:48.484 size: 1.000366 MiB name: RG_ring_0_104041 00:08:48.484 size: 1.000366 MiB name: RG_ring_1_104041 00:08:48.484 size: 1.000366 MiB name: RG_ring_4_104041 00:08:48.484 size: 1.000366 MiB name: RG_ring_5_104041 00:08:48.484 size: 0.125366 MiB name: RG_ring_2_104041 00:08:48.484 size: 0.015991 MiB name: RG_ring_3_104041 00:08:48.484 end memzones------- 00:08:48.484 16:47:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:48.484 heap id: 0 total size: 820.000000 MiB number of busy elements: 222 number of free elements: 18 00:08:48.484 list of free elements. 
size: 18.470703 MiB 00:08:48.484 element at address: 0x200000400000 with size: 1.999451 MiB 00:08:48.484 element at address: 0x200000800000 with size: 1.996887 MiB 00:08:48.484 element at address: 0x200007000000 with size: 1.995972 MiB 00:08:48.484 element at address: 0x20000b200000 with size: 1.995972 MiB 00:08:48.484 element at address: 0x200019100040 with size: 0.999939 MiB 00:08:48.484 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:48.484 element at address: 0x200019600000 with size: 0.999329 MiB 00:08:48.484 element at address: 0x200003e00000 with size: 0.996094 MiB 00:08:48.484 element at address: 0x200032200000 with size: 0.994324 MiB 00:08:48.484 element at address: 0x200018e00000 with size: 0.959656 MiB 00:08:48.484 element at address: 0x200019900040 with size: 0.937256 MiB 00:08:48.484 element at address: 0x200000200000 with size: 0.835083 MiB 00:08:48.484 element at address: 0x20001b000000 with size: 0.561951 MiB 00:08:48.484 element at address: 0x200019200000 with size: 0.489197 MiB 00:08:48.484 element at address: 0x200019a00000 with size: 0.485413 MiB 00:08:48.484 element at address: 0x200013800000 with size: 0.468140 MiB 00:08:48.484 element at address: 0x200028400000 with size: 0.399963 MiB 00:08:48.484 element at address: 0x200003a00000 with size: 0.356140 MiB 00:08:48.484 list of standard malloc elements. size: 199.264893 MiB 00:08:48.484 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:08:48.484 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:08:48.484 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:08:48.484 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:48.484 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:48.484 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:48.484 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:08:48.484 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:48.484 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:08:48.484 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:08:48.484 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:08:48.484 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d7000 with size: 0.000244 MiB 
00:08:48.484 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:48.484 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:48.484 element at address: 0x200003aff980 with size: 0.000244 MiB 00:08:48.484 element at address: 0x200003affa80 with size: 0.000244 MiB 00:08:48.484 element at address: 0x200003eff000 with size: 0.000244 MiB 00:08:48.484 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:08:48.484 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:08:48.484 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:08:48.484 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:08:48.484 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:08:48.484 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:08:48.484 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:08:48.484 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:08:48.485 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:08:48.485 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:08:48.485 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:08:48.485 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:08:48.485 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:08:48.485 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:08:48.485 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:08:48.485 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:08:48.485 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:08:48.485 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:08:48.485 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:08:48.485 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:08:48.485 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:08:48.485 element at address: 0x200013877d80 with size: 0.000244 MiB 00:08:48.485 element at address: 0x200013877e80 with size: 0.000244 MiB 00:08:48.485 element at address: 0x200013877f80 with size: 0.000244 MiB 00:08:48.485 element at address: 0x200013878080 with size: 0.000244 MiB 00:08:48.485 element at address: 0x200013878180 with size: 0.000244 MiB 00:08:48.485 element at address: 0x200013878280 with size: 0.000244 MiB 00:08:48.485 element at address: 0x200013878380 with size: 0.000244 MiB 00:08:48.485 element at address: 0x200013878480 with size: 0.000244 MiB 00:08:48.485 element at 
address: 0x200013878580 with size: 0.000244 MiB 00:08:48.485 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:08:48.485 element at address: 0x200019abc680 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b08fdc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0922c0 
with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20001b0953c0 with size: 0.000244 MiB 
00:08:48.485 element at address: 0x200028466640 with size: 0.000244 MiB 00:08:48.485 element at address: 0x200028466740 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846d400 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846d680 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846d780 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846d880 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846d980 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846da80 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846db80 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846de80 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846df80 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846e080 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846e180 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846e280 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846e380 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846e480 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846e580 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846e680 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846e780 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846e880 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846e980 with size: 0.000244 MiB 00:08:48.485 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:08:48.486 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:08:48.486 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:08:48.486 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:08:48.486 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:08:48.486 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:08:48.486 element at address: 0x20002846f080 with size: 0.000244 MiB 00:08:48.486 element at address: 0x20002846f180 with size: 0.000244 MiB 00:08:48.486 element at address: 0x20002846f280 with size: 0.000244 MiB 00:08:48.486 element at address: 0x20002846f380 with size: 0.000244 MiB 00:08:48.486 element at address: 0x20002846f480 with size: 0.000244 MiB 00:08:48.486 element at address: 0x20002846f580 with size: 0.000244 MiB 00:08:48.486 element at address: 0x20002846f680 with size: 0.000244 MiB 00:08:48.486 element at address: 0x20002846f780 with size: 0.000244 MiB 00:08:48.486 element at address: 0x20002846f880 with size: 0.000244 MiB 00:08:48.486 element at address: 0x20002846f980 with size: 0.000244 MiB 00:08:48.486 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:08:48.486 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:08:48.486 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:08:48.486 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:08:48.486 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:08:48.486 list of memzone associated elements. 
size: 602.264404 MiB 00:08:48.486 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:08:48.486 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:48.486 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:08:48.486 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:48.486 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:08:48.486 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_104041_0 00:08:48.486 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:08:48.486 associated memzone info: size: 48.002930 MiB name: MP_evtpool_104041_0 00:08:48.486 element at address: 0x200003fff340 with size: 48.003113 MiB 00:08:48.486 associated memzone info: size: 48.002930 MiB name: MP_msgpool_104041_0 00:08:48.486 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:08:48.486 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:48.486 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:08:48.486 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:48.486 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:08:48.486 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_104041 00:08:48.486 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:08:48.486 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_104041 00:08:48.486 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:48.486 associated memzone info: size: 1.007996 MiB name: MP_evtpool_104041 00:08:48.486 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:48.486 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:48.486 element at address: 0x200019abc780 with size: 1.008179 MiB 00:08:48.486 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:48.486 element at address: 0x200018efde00 with size: 1.008179 MiB 00:08:48.486 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:48.486 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:08:48.486 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:48.486 element at address: 0x200003eff100 with size: 1.000549 MiB 00:08:48.486 associated memzone info: size: 1.000366 MiB name: RG_ring_0_104041 00:08:48.486 element at address: 0x200003affb80 with size: 1.000549 MiB 00:08:48.486 associated memzone info: size: 1.000366 MiB name: RG_ring_1_104041 00:08:48.486 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:08:48.486 associated memzone info: size: 1.000366 MiB name: RG_ring_4_104041 00:08:48.486 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:08:48.486 associated memzone info: size: 1.000366 MiB name: RG_ring_5_104041 00:08:48.486 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:08:48.486 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_104041 00:08:48.486 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:08:48.486 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:48.486 element at address: 0x200013878680 with size: 0.500549 MiB 00:08:48.486 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:48.486 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:08:48.486 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:48.486 element at address: 0x200003adf740 with size: 0.125549 MiB 00:08:48.486 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_104041 00:08:48.486 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:08:48.486 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:48.486 element at address: 0x200028466840 with size: 0.023804 MiB 00:08:48.486 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:48.486 element at address: 0x200003adb500 with size: 0.016174 MiB 00:08:48.486 associated memzone info: size: 0.015991 MiB name: RG_ring_3_104041 00:08:48.486 element at address: 0x20002846c9c0 with size: 0.002502 MiB 00:08:48.486 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:48.486 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:08:48.486 associated memzone info: size: 0.000183 MiB name: MP_msgpool_104041 00:08:48.486 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:08:48.486 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_104041 00:08:48.486 element at address: 0x20002846d500 with size: 0.000366 MiB 00:08:48.486 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:48.486 16:47:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:48.486 16:47:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 104041 00:08:48.486 16:47:37 -- common/autotest_common.sh@936 -- # '[' -z 104041 ']' 00:08:48.486 16:47:37 -- common/autotest_common.sh@940 -- # kill -0 104041 00:08:48.486 16:47:37 -- common/autotest_common.sh@941 -- # uname 00:08:48.486 16:47:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:48.486 16:47:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104041 00:08:48.486 16:47:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:48.486 16:47:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:48.486 killing process with pid 104041 00:08:48.486 16:47:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 104041' 00:08:48.486 16:47:37 -- common/autotest_common.sh@955 -- # kill 104041 00:08:48.486 16:47:37 -- common/autotest_common.sh@960 -- # wait 104041 00:08:50.390 00:08:50.390 real 0m3.746s 00:08:50.390 user 0m3.883s 00:08:50.390 sys 0m0.558s 00:08:50.390 16:47:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.390 16:47:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.390 ************************************ 00:08:50.390 END TEST dpdk_mem_utility 00:08:50.390 ************************************ 00:08:50.390 16:47:39 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:50.390 16:47:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:50.390 16:47:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.390 16:47:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.390 ************************************ 00:08:50.390 START TEST event 00:08:50.390 ************************************ 00:08:50.390 16:47:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:50.390 * Looking for test storage... 
00:08:50.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:50.390 16:47:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:50.390 16:47:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:50.390 16:47:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:50.390 16:47:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:50.390 16:47:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:50.390 16:47:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:50.390 16:47:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:50.390 16:47:39 -- scripts/common.sh@335 -- # IFS=.-: 00:08:50.390 16:47:39 -- scripts/common.sh@335 -- # read -ra ver1 00:08:50.390 16:47:39 -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.390 16:47:39 -- scripts/common.sh@336 -- # read -ra ver2 00:08:50.390 16:47:39 -- scripts/common.sh@337 -- # local 'op=<' 00:08:50.390 16:47:39 -- scripts/common.sh@339 -- # ver1_l=2 00:08:50.390 16:47:39 -- scripts/common.sh@340 -- # ver2_l=1 00:08:50.390 16:47:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:50.390 16:47:39 -- scripts/common.sh@343 -- # case "$op" in 00:08:50.390 16:47:39 -- scripts/common.sh@344 -- # : 1 00:08:50.390 16:47:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:50.390 16:47:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:50.390 16:47:39 -- scripts/common.sh@364 -- # decimal 1 00:08:50.390 16:47:39 -- scripts/common.sh@352 -- # local d=1 00:08:50.390 16:47:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.390 16:47:39 -- scripts/common.sh@354 -- # echo 1 00:08:50.390 16:47:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:50.390 16:47:39 -- scripts/common.sh@365 -- # decimal 2 00:08:50.390 16:47:39 -- scripts/common.sh@352 -- # local d=2 00:08:50.390 16:47:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.390 16:47:39 -- scripts/common.sh@354 -- # echo 2 00:08:50.390 16:47:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:50.390 16:47:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:50.390 16:47:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:50.390 16:47:39 -- scripts/common.sh@367 -- # return 0 00:08:50.390 16:47:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.390 16:47:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:50.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.390 --rc genhtml_branch_coverage=1 00:08:50.390 --rc genhtml_function_coverage=1 00:08:50.390 --rc genhtml_legend=1 00:08:50.390 --rc geninfo_all_blocks=1 00:08:50.390 --rc geninfo_unexecuted_blocks=1 00:08:50.390 00:08:50.390 ' 00:08:50.390 16:47:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:50.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.390 --rc genhtml_branch_coverage=1 00:08:50.390 --rc genhtml_function_coverage=1 00:08:50.390 --rc genhtml_legend=1 00:08:50.390 --rc geninfo_all_blocks=1 00:08:50.390 --rc geninfo_unexecuted_blocks=1 00:08:50.390 00:08:50.390 ' 00:08:50.390 16:47:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:50.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.390 --rc genhtml_branch_coverage=1 00:08:50.390 --rc genhtml_function_coverage=1 00:08:50.390 --rc genhtml_legend=1 00:08:50.390 --rc geninfo_all_blocks=1 00:08:50.390 --rc geninfo_unexecuted_blocks=1 00:08:50.390 00:08:50.390 ' 00:08:50.390 16:47:39 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:50.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.390 --rc genhtml_branch_coverage=1 00:08:50.390 --rc genhtml_function_coverage=1 00:08:50.390 --rc genhtml_legend=1 00:08:50.390 --rc geninfo_all_blocks=1 00:08:50.390 --rc geninfo_unexecuted_blocks=1 00:08:50.390 00:08:50.390 ' 00:08:50.390 16:47:39 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:50.390 16:47:39 -- bdev/nbd_common.sh@6 -- # set -e 00:08:50.390 16:47:39 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:50.390 16:47:39 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:50.390 16:47:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.390 16:47:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.390 ************************************ 00:08:50.390 START TEST event_perf 00:08:50.390 ************************************ 00:08:50.390 16:47:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:50.650 Running I/O for 1 seconds...[2024-11-05 16:47:39.311167] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:50.650 [2024-11-05 16:47:39.311353] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104165 ] 00:08:50.650 [2024-11-05 16:47:39.496516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.908 [2024-11-05 16:47:39.656311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.908 [2024-11-05 16:47:39.656470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.908 [2024-11-05 16:47:39.656595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.908 [2024-11-05 16:47:39.656598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.284 Running I/O for 1 seconds... 00:08:52.284 lcore 0: 218485 00:08:52.284 lcore 1: 218484 00:08:52.284 lcore 2: 218484 00:08:52.284 lcore 3: 218483 00:08:52.284 done. 00:08:52.284 00:08:52.284 real 0m1.702s 00:08:52.284 user 0m4.468s 00:08:52.284 sys 0m0.140s 00:08:52.284 16:47:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:52.284 16:47:40 -- common/autotest_common.sh@10 -- # set +x 00:08:52.284 ************************************ 00:08:52.284 END TEST event_perf 00:08:52.284 ************************************ 00:08:52.284 16:47:41 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:52.284 16:47:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:52.284 16:47:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:52.284 16:47:41 -- common/autotest_common.sh@10 -- # set +x 00:08:52.284 ************************************ 00:08:52.284 START TEST event_reactor 00:08:52.284 ************************************ 00:08:52.284 16:47:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:52.284 [2024-11-05 16:47:41.052821] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
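The event_perf run above can be reproduced standalone, assuming the flags from the traced command: -m sets the reactor core mask and -t the run time in seconds; with mask 0xF each of the four lcores reports its own event count, roughly 218k apiece here.

./test/event/event_perf/event_perf -m 0xF -t 1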
00:08:52.284 [2024-11-05 16:47:41.052989] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104206 ] 00:08:52.573 [2024-11-05 16:47:41.205520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.573 [2024-11-05 16:47:41.370772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.958 test_start 00:08:53.958 oneshot 00:08:53.959 tick 100 00:08:53.959 tick 100 00:08:53.959 tick 250 00:08:53.959 tick 100 00:08:53.959 tick 100 00:08:53.959 tick 100 00:08:53.959 tick 250 00:08:53.959 tick 500 00:08:53.959 tick 100 00:08:53.959 tick 100 00:08:53.959 tick 250 00:08:53.959 tick 100 00:08:53.959 tick 100 00:08:53.959 test_end 00:08:53.959 00:08:53.959 real 0m1.667s 00:08:53.959 user 0m1.445s 00:08:53.959 sys 0m0.121s 00:08:53.959 16:47:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:53.959 16:47:42 -- common/autotest_common.sh@10 -- # set +x 00:08:53.959 ************************************ 00:08:53.959 END TEST event_reactor 00:08:53.959 ************************************ 00:08:53.959 16:47:42 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:53.959 16:47:42 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:53.959 16:47:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.959 16:47:42 -- common/autotest_common.sh@10 -- # set +x 00:08:53.959 ************************************ 00:08:53.959 START TEST event_reactor_perf 00:08:53.959 ************************************ 00:08:53.959 16:47:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:53.959 [2024-11-05 16:47:42.783744] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
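Both reactor tests take the same -t duration flag as the traced commands; reactor emits the oneshot/tick log above, while reactor_perf reports the events-per-second figure below.

./test/event/reactor/reactor -t 1
./test/event/reactor_perf/reactor_perf -t 1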
00:08:53.959 [2024-11-05 16:47:42.783951] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104256 ] 00:08:54.218 [2024-11-05 16:47:42.951980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.475 [2024-11-05 16:47:43.106777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.850 test_start 00:08:55.850 test_end 00:08:55.850 Performance: 401554 events per second 00:08:55.850 00:08:55.850 real 0m1.694s 00:08:55.850 user 0m1.466s 00:08:55.850 sys 0m0.128s 00:08:55.850 16:47:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.850 16:47:44 -- common/autotest_common.sh@10 -- # set +x 00:08:55.850 ************************************ 00:08:55.850 END TEST event_reactor_perf 00:08:55.850 ************************************ 00:08:55.850 16:47:44 -- event/event.sh@49 -- # uname -s 00:08:55.850 16:47:44 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:55.850 16:47:44 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:55.850 16:47:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:55.850 16:47:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.850 16:47:44 -- common/autotest_common.sh@10 -- # set +x 00:08:55.850 ************************************ 00:08:55.850 START TEST event_scheduler 00:08:55.850 ************************************ 00:08:55.850 16:47:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:55.850 * Looking for test storage... 00:08:55.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:55.850 16:47:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:55.850 16:47:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:55.850 16:47:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:55.850 16:47:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:55.850 16:47:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:55.850 16:47:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:55.850 16:47:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:55.850 16:47:44 -- scripts/common.sh@335 -- # IFS=.-: 00:08:55.850 16:47:44 -- scripts/common.sh@335 -- # read -ra ver1 00:08:55.850 16:47:44 -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.850 16:47:44 -- scripts/common.sh@336 -- # read -ra ver2 00:08:55.850 16:47:44 -- scripts/common.sh@337 -- # local 'op=<' 00:08:55.850 16:47:44 -- scripts/common.sh@339 -- # ver1_l=2 00:08:55.850 16:47:44 -- scripts/common.sh@340 -- # ver2_l=1 00:08:55.850 16:47:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:55.850 16:47:44 -- scripts/common.sh@343 -- # case "$op" in 00:08:55.850 16:47:44 -- scripts/common.sh@344 -- # : 1 00:08:55.850 16:47:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:55.850 16:47:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.850 16:47:44 -- scripts/common.sh@364 -- # decimal 1 00:08:55.850 16:47:44 -- scripts/common.sh@352 -- # local d=1 00:08:55.850 16:47:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.850 16:47:44 -- scripts/common.sh@354 -- # echo 1 00:08:55.850 16:47:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:55.850 16:47:44 -- scripts/common.sh@365 -- # decimal 2 00:08:55.850 16:47:44 -- scripts/common.sh@352 -- # local d=2 00:08:55.850 16:47:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.850 16:47:44 -- scripts/common.sh@354 -- # echo 2 00:08:55.850 16:47:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:55.850 16:47:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:55.850 16:47:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:55.850 16:47:44 -- scripts/common.sh@367 -- # return 0 00:08:55.850 16:47:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.850 16:47:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:55.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.850 --rc genhtml_branch_coverage=1 00:08:55.850 --rc genhtml_function_coverage=1 00:08:55.850 --rc genhtml_legend=1 00:08:55.850 --rc geninfo_all_blocks=1 00:08:55.850 --rc geninfo_unexecuted_blocks=1 00:08:55.850 00:08:55.850 ' 00:08:55.850 16:47:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:55.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.850 --rc genhtml_branch_coverage=1 00:08:55.850 --rc genhtml_function_coverage=1 00:08:55.850 --rc genhtml_legend=1 00:08:55.850 --rc geninfo_all_blocks=1 00:08:55.850 --rc geninfo_unexecuted_blocks=1 00:08:55.850 00:08:55.850 ' 00:08:55.850 16:47:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:55.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.850 --rc genhtml_branch_coverage=1 00:08:55.850 --rc genhtml_function_coverage=1 00:08:55.850 --rc genhtml_legend=1 00:08:55.850 --rc geninfo_all_blocks=1 00:08:55.850 --rc geninfo_unexecuted_blocks=1 00:08:55.850 00:08:55.850 ' 00:08:55.850 16:47:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:55.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.850 --rc genhtml_branch_coverage=1 00:08:55.850 --rc genhtml_function_coverage=1 00:08:55.850 --rc genhtml_legend=1 00:08:55.850 --rc geninfo_all_blocks=1 00:08:55.850 --rc geninfo_unexecuted_blocks=1 00:08:55.850 00:08:55.850 ' 00:08:55.850 16:47:44 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:55.850 16:47:44 -- scheduler/scheduler.sh@35 -- # scheduler_pid=104340 00:08:55.851 16:47:44 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:55.851 16:47:44 -- scheduler/scheduler.sh@37 -- # waitforlisten 104340 00:08:55.851 16:47:44 -- common/autotest_common.sh@829 -- # '[' -z 104340 ']' 00:08:55.851 16:47:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.851 16:47:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:55.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.851 16:47:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:55.851 16:47:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:55.851 16:47:44 -- common/autotest_common.sh@10 -- # set +x 00:08:55.851 16:47:44 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:56.109 [2024-11-05 16:47:44.745546] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:56.109 [2024-11-05 16:47:44.745750] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104340 ] 00:08:56.109 [2024-11-05 16:47:44.938688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:56.368 [2024-11-05 16:47:45.164258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.368 [2024-11-05 16:47:45.164382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.368 [2024-11-05 16:47:45.164510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.368 [2024-11-05 16:47:45.164514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.937 16:47:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:56.937 16:47:45 -- common/autotest_common.sh@862 -- # return 0 00:08:56.937 16:47:45 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:56.937 16:47:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.937 16:47:45 -- common/autotest_common.sh@10 -- # set +x 00:08:56.937 POWER: Env isn't set yet! 00:08:56.937 POWER: Attempting to initialise ACPI cpufreq power management... 00:08:56.937 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:56.937 POWER: Cannot set governor of lcore 0 to userspace 00:08:56.937 POWER: Attempting to initialise PSTAT power management... 00:08:56.937 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:56.937 POWER: Cannot set governor of lcore 0 to performance 00:08:56.937 POWER: Attempting to initialise AMD PSTATE power management... 00:08:56.937 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:56.937 POWER: Cannot set governor of lcore 0 to userspace 00:08:56.937 POWER: Attempting to initialise CPPC power management... 00:08:56.937 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:56.937 POWER: Cannot set governor of lcore 0 to userspace 00:08:56.937 POWER: Attempting to initialise VM power management... 
00:08:56.937 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:56.937 POWER: Unable to set Power Management Environment for lcore 0 00:08:56.937 [2024-11-05 16:47:45.643138] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:08:56.937 [2024-11-05 16:47:45.643189] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:08:56.937 [2024-11-05 16:47:45.643210] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:08:56.937 [2024-11-05 16:47:45.643270] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:56.937 [2024-11-05 16:47:45.643325] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:56.937 [2024-11-05 16:47:45.643367] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:56.937 16:47:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.937 16:47:45 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:56.937 16:47:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.937 16:47:45 -- common/autotest_common.sh@10 -- # set +x 00:08:57.196 [2024-11-05 16:47:45.906371] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:57.196 16:47:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.196 16:47:45 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:57.196 16:47:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:57.196 16:47:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.196 16:47:45 -- common/autotest_common.sh@10 -- # set +x 00:08:57.196 ************************************ 00:08:57.196 START TEST scheduler_create_thread 00:08:57.196 ************************************ 00:08:57.196 16:47:45 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:08:57.196 16:47:45 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:57.196 16:47:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.196 16:47:45 -- common/autotest_common.sh@10 -- # set +x 00:08:57.196 2 00:08:57.196 16:47:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.196 16:47:45 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:57.196 16:47:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.196 16:47:45 -- common/autotest_common.sh@10 -- # set +x 00:08:57.196 3 00:08:57.196 16:47:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.196 16:47:45 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:57.196 16:47:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.196 16:47:45 -- common/autotest_common.sh@10 -- # set +x 00:08:57.196 4 00:08:57.196 16:47:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.196 16:47:45 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:57.196 16:47:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.196 16:47:45 -- common/autotest_common.sh@10 -- # set +x 00:08:57.196 5 00:08:57.196 16:47:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.196 16:47:45 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:57.196 16:47:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.196 16:47:45 -- common/autotest_common.sh@10 -- # set +x 00:08:57.196 6 00:08:57.196 16:47:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.196 16:47:45 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:57.196 16:47:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.196 16:47:45 -- common/autotest_common.sh@10 -- # set +x 00:08:57.196 7 00:08:57.196 16:47:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.196 16:47:45 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:57.196 16:47:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.196 16:47:45 -- common/autotest_common.sh@10 -- # set +x 00:08:57.196 8 00:08:57.196 16:47:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.196 16:47:45 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:57.196 16:47:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.196 16:47:45 -- common/autotest_common.sh@10 -- # set +x 00:08:57.196 9 00:08:57.196 16:47:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.196 16:47:45 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:57.196 16:47:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.196 16:47:45 -- common/autotest_common.sh@10 -- # set +x 00:08:57.196 10 00:08:57.196 16:47:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.196 16:47:45 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:57.196 16:47:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.196 16:47:45 -- common/autotest_common.sh@10 -- # set +x 00:08:57.196 16:47:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.196 16:47:46 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:57.196 16:47:46 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:57.196 16:47:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.196 16:47:46 -- common/autotest_common.sh@10 -- # set +x 00:08:57.196 16:47:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.196 16:47:46 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:57.196 16:47:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.196 16:47:46 -- common/autotest_common.sh@10 -- # set +x 00:08:58.159 16:47:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.159 16:47:47 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:58.159 16:47:47 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:58.159 16:47:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.159 16:47:47 -- common/autotest_common.sh@10 -- # set +x 00:08:59.536 16:47:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.536 00:08:59.536 real 0m2.152s 00:08:59.536 user 0m0.018s 00:08:59.536 sys 0m0.000s 00:08:59.536 16:47:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:59.536 16:47:48 -- common/autotest_common.sh@10 -- # set +x 00:08:59.536 
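The RPC sequence above is the whole scheduler_create_thread workload: a 100%-busy thread pinned to each of the four cores (masks 0x1 through 0x8), an idle thread per core, two unpinned threads, then a live retune (scheduler_thread_set_active 11 50) and a delete. Replayed standalone it would look roughly like this; the test itself routes every call through its rpc_cmd wrapper:

    rpc='scripts/rpc.py --plugin scheduler_plugin'
    for mask in 0x1 0x2 0x4 0x8; do
        $rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100   # always busy
        $rpc scheduler_thread_create -n idle_pinned -m "$mask" -a 0       # never busy
    done
    $rpc scheduler_thread_create -n one_third_active -a 30                # unpinned, 30% busy
    id=$($rpc scheduler_thread_create -n half_active -a 0)
    $rpc scheduler_thread_set_active "$id" 50                             # retuned at runtime
    id=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$id"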
************************************ 00:08:59.536 END TEST scheduler_create_thread 00:08:59.536 ************************************ 00:08:59.536 16:47:48 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:59.536 16:47:48 -- scheduler/scheduler.sh@46 -- # killprocess 104340 00:08:59.536 16:47:48 -- common/autotest_common.sh@936 -- # '[' -z 104340 ']' 00:08:59.536 16:47:48 -- common/autotest_common.sh@940 -- # kill -0 104340 00:08:59.536 16:47:48 -- common/autotest_common.sh@941 -- # uname 00:08:59.536 16:47:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:59.536 16:47:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104340 00:08:59.536 16:47:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:08:59.536 16:47:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:08:59.536 killing process with pid 104340 00:08:59.536 16:47:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 104340' 00:08:59.536 16:47:48 -- common/autotest_common.sh@955 -- # kill 104340 00:08:59.536 16:47:48 -- common/autotest_common.sh@960 -- # wait 104340 00:08:59.795 [2024-11-05 16:47:48.553496] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:09:00.730 00:09:00.730 real 0m5.073s 00:09:00.730 user 0m8.244s 00:09:00.730 sys 0m0.386s 00:09:00.730 16:47:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:00.730 16:47:49 -- common/autotest_common.sh@10 -- # set +x 00:09:00.730 ************************************ 00:09:00.730 END TEST event_scheduler 00:09:00.730 ************************************ 00:09:00.730 16:47:49 -- event/event.sh@51 -- # modprobe -n nbd 00:09:00.730 16:47:49 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:00.730 16:47:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:00.730 16:47:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.730 16:47:49 -- common/autotest_common.sh@10 -- # set +x 00:09:00.994 ************************************ 00:09:00.994 START TEST app_repeat 00:09:00.994 ************************************ 00:09:00.994 16:47:49 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:09:00.994 16:47:49 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.994 16:47:49 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:00.994 16:47:49 -- event/event.sh@13 -- # local nbd_list 00:09:00.994 16:47:49 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:00.994 16:47:49 -- event/event.sh@14 -- # local bdev_list 00:09:00.994 16:47:49 -- event/event.sh@15 -- # local repeat_times=4 00:09:00.994 16:47:49 -- event/event.sh@17 -- # modprobe nbd 00:09:00.994 16:47:49 -- event/event.sh@19 -- # repeat_pid=104460 00:09:00.994 16:47:49 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:00.994 16:47:49 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:00.994 Process app_repeat pid: 104460 00:09:00.994 16:47:49 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 104460' 00:09:00.994 16:47:49 -- event/event.sh@23 -- # for i in {0..2} 00:09:00.994 spdk_app_start Round 0 00:09:00.994 16:47:49 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:00.994 16:47:49 -- event/event.sh@25 -- # waitforlisten 104460 /var/tmp/spdk-nbd.sock 00:09:00.994 16:47:49 -- common/autotest_common.sh@829 -- # '[' -z 104460 ']' 00:09:00.994 16:47:49 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:00.994 16:47:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:00.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:00.994 16:47:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:00.994 16:47:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:00.994 16:47:49 -- common/autotest_common.sh@10 -- # set +x 00:09:00.994 [2024-11-05 16:47:49.680326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:00.994 [2024-11-05 16:47:49.680569] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104460 ] 00:09:00.994 [2024-11-05 16:47:49.850500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:01.252 [2024-11-05 16:47:50.032855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.252 [2024-11-05 16:47:50.032861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.819 16:47:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:01.819 16:47:50 -- common/autotest_common.sh@862 -- # return 0 00:09:01.819 16:47:50 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:02.387 Malloc0 00:09:02.387 16:47:51 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:02.645 Malloc1 00:09:02.645 16:47:51 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:02.645 16:47:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.645 16:47:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:02.645 16:47:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:02.646 16:47:51 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:02.646 16:47:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:02.646 16:47:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:02.646 16:47:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.646 16:47:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:02.646 16:47:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:02.646 16:47:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:02.646 16:47:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:02.646 16:47:51 -- bdev/nbd_common.sh@12 -- # local i 00:09:02.646 16:47:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:02.646 16:47:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:02.646 16:47:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:02.905 /dev/nbd0 00:09:02.905 16:47:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:02.905 16:47:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:02.905 16:47:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:02.905 16:47:51 -- common/autotest_common.sh@867 -- # local i 00:09:02.905 16:47:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:02.905 
16:47:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:02.905 16:47:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:02.905 16:47:51 -- common/autotest_common.sh@871 -- # break 00:09:02.905 16:47:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:02.905 16:47:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:02.905 16:47:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:02.905 1+0 records in 00:09:02.905 1+0 records out 00:09:02.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315982 s, 13.0 MB/s 00:09:02.905 16:47:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:02.905 16:47:51 -- common/autotest_common.sh@884 -- # size=4096 00:09:02.905 16:47:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:02.905 16:47:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:02.905 16:47:51 -- common/autotest_common.sh@887 -- # return 0 00:09:02.905 16:47:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:02.905 16:47:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:02.905 16:47:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:03.165 /dev/nbd1 00:09:03.165 16:47:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:03.165 16:47:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:03.165 16:47:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:03.165 16:47:51 -- common/autotest_common.sh@867 -- # local i 00:09:03.165 16:47:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:03.165 16:47:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:03.165 16:47:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:03.165 16:47:51 -- common/autotest_common.sh@871 -- # break 00:09:03.165 16:47:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:03.165 16:47:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:03.165 16:47:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:03.165 1+0 records in 00:09:03.165 1+0 records out 00:09:03.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292766 s, 14.0 MB/s 00:09:03.166 16:47:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:03.166 16:47:51 -- common/autotest_common.sh@884 -- # size=4096 00:09:03.166 16:47:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:03.166 16:47:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:03.166 16:47:51 -- common/autotest_common.sh@887 -- # return 0 00:09:03.166 16:47:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:03.166 16:47:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:03.166 16:47:51 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:03.166 16:47:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.166 16:47:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:03.427 { 00:09:03.427 "nbd_device": "/dev/nbd0", 00:09:03.427 "bdev_name": "Malloc0" 00:09:03.427 }, 00:09:03.427 { 00:09:03.427 "nbd_device": 
"/dev/nbd1", 00:09:03.427 "bdev_name": "Malloc1" 00:09:03.427 } 00:09:03.427 ]' 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:03.427 { 00:09:03.427 "nbd_device": "/dev/nbd0", 00:09:03.427 "bdev_name": "Malloc0" 00:09:03.427 }, 00:09:03.427 { 00:09:03.427 "nbd_device": "/dev/nbd1", 00:09:03.427 "bdev_name": "Malloc1" 00:09:03.427 } 00:09:03.427 ]' 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:03.427 /dev/nbd1' 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:03.427 /dev/nbd1' 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@65 -- # count=2 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@95 -- # count=2 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:03.427 256+0 records in 00:09:03.427 256+0 records out 00:09:03.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010968 s, 95.6 MB/s 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:03.427 256+0 records in 00:09:03.427 256+0 records out 00:09:03.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217038 s, 48.3 MB/s 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:03.427 256+0 records in 00:09:03.427 256+0 records out 00:09:03.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0357525 s, 29.3 MB/s 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@51 -- # local i 00:09:03.427 16:47:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.428 16:47:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:03.686 16:47:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:03.686 16:47:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:03.686 16:47:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:03.686 16:47:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.686 16:47:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.686 16:47:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:03.686 16:47:52 -- bdev/nbd_common.sh@41 -- # break 00:09:03.686 16:47:52 -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.686 16:47:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.686 16:47:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:03.945 16:47:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:03.945 16:47:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:03.945 16:47:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:03.945 16:47:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.945 16:47:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.945 16:47:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:03.945 16:47:52 -- bdev/nbd_common.sh@41 -- # break 00:09:03.945 16:47:52 -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.945 16:47:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:03.945 16:47:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.945 16:47:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:04.203 16:47:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:04.203 16:47:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:04.203 16:47:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:04.203 16:47:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:04.203 16:47:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:04.203 16:47:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:04.203 16:47:53 -- bdev/nbd_common.sh@65 -- # true 00:09:04.203 16:47:53 -- bdev/nbd_common.sh@65 -- # count=0 00:09:04.203 16:47:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:04.203 16:47:53 -- bdev/nbd_common.sh@104 -- # count=0 00:09:04.203 16:47:53 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:04.203 16:47:53 -- bdev/nbd_common.sh@109 -- # return 0 00:09:04.203 16:47:53 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:04.771 16:47:53 -- event/event.sh@35 -- # sleep 3 00:09:05.706 [2024-11-05 16:47:54.407658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:05.706 [2024-11-05 16:47:54.578985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.706 
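Each app_repeat round runs the same data-integrity pass traced above: one random 1 MiB pattern is written through both NBD devices with O_DIRECT and byte-compared back, so any corruption in the Malloc bdev path fails the round. Condensed, with paths as they appear in the log:

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write through the NBD device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                              # non-zero exit on any mismatch
    done
    rm "$tmp"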
[2024-11-05 16:47:54.578997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.965 [2024-11-05 16:47:54.747757] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:05.965 [2024-11-05 16:47:54.747917] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:07.868 16:47:56 -- event/event.sh@23 -- # for i in {0..2} 00:09:07.868 16:47:56 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:07.868 spdk_app_start Round 1 00:09:07.868 16:47:56 -- event/event.sh@25 -- # waitforlisten 104460 /var/tmp/spdk-nbd.sock 00:09:07.868 16:47:56 -- common/autotest_common.sh@829 -- # '[' -z 104460 ']' 00:09:07.868 16:47:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:07.868 16:47:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:07.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:07.868 16:47:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:07.868 16:47:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:07.868 16:47:56 -- common/autotest_common.sh@10 -- # set +x 00:09:07.868 16:47:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:07.868 16:47:56 -- common/autotest_common.sh@862 -- # return 0 00:09:07.868 16:47:56 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:08.128 Malloc0 00:09:08.128 16:47:56 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:08.387 Malloc1 00:09:08.387 16:47:57 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:08.387 16:47:57 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.387 16:47:57 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:08.387 16:47:57 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:08.387 16:47:57 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:08.387 16:47:57 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:08.387 16:47:57 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:08.387 16:47:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.387 16:47:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:08.387 16:47:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:08.387 16:47:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:08.387 16:47:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:08.387 16:47:57 -- bdev/nbd_common.sh@12 -- # local i 00:09:08.387 16:47:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:08.387 16:47:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:08.387 16:47:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:08.646 /dev/nbd0 00:09:08.646 16:47:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:08.646 16:47:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:08.646 16:47:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:08.646 16:47:57 -- common/autotest_common.sh@867 -- # local i 00:09:08.646 16:47:57 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:08.646 16:47:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:08.646 16:47:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:08.646 16:47:57 -- common/autotest_common.sh@871 -- # break 00:09:08.646 16:47:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:08.646 16:47:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:08.646 16:47:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:08.646 1+0 records in 00:09:08.646 1+0 records out 00:09:08.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054692 s, 7.5 MB/s 00:09:08.646 16:47:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:08.646 16:47:57 -- common/autotest_common.sh@884 -- # size=4096 00:09:08.646 16:47:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:08.646 16:47:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:08.646 16:47:57 -- common/autotest_common.sh@887 -- # return 0 00:09:08.646 16:47:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:08.646 16:47:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:08.647 16:47:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:08.907 /dev/nbd1 00:09:08.907 16:47:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:08.907 16:47:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:08.907 16:47:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:08.907 16:47:57 -- common/autotest_common.sh@867 -- # local i 00:09:08.907 16:47:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:08.907 16:47:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:08.907 16:47:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:08.907 16:47:57 -- common/autotest_common.sh@871 -- # break 00:09:08.907 16:47:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:08.907 16:47:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:08.907 16:47:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:08.907 1+0 records in 00:09:08.907 1+0 records out 00:09:08.907 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216818 s, 18.9 MB/s 00:09:08.907 16:47:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:08.907 16:47:57 -- common/autotest_common.sh@884 -- # size=4096 00:09:08.907 16:47:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:08.907 16:47:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:08.907 16:47:57 -- common/autotest_common.sh@887 -- # return 0 00:09:08.907 16:47:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:08.907 16:47:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:08.907 16:47:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:08.907 16:47:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.907 16:47:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:09.166 16:47:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:09.166 { 00:09:09.166 "nbd_device": "/dev/nbd0", 00:09:09.166 "bdev_name": "Malloc0" 
00:09:09.166 }, 00:09:09.166 { 00:09:09.166 "nbd_device": "/dev/nbd1", 00:09:09.166 "bdev_name": "Malloc1" 00:09:09.166 } 00:09:09.166 ]' 00:09:09.166 16:47:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:09.166 { 00:09:09.166 "nbd_device": "/dev/nbd0", 00:09:09.166 "bdev_name": "Malloc0" 00:09:09.166 }, 00:09:09.166 { 00:09:09.166 "nbd_device": "/dev/nbd1", 00:09:09.166 "bdev_name": "Malloc1" 00:09:09.166 } 00:09:09.166 ]' 00:09:09.166 16:47:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:09.166 16:47:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:09.166 /dev/nbd1' 00:09:09.166 16:47:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:09.166 /dev/nbd1' 00:09:09.166 16:47:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:09.166 16:47:58 -- bdev/nbd_common.sh@65 -- # count=2 00:09:09.166 16:47:58 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:09.166 16:47:58 -- bdev/nbd_common.sh@95 -- # count=2 00:09:09.166 16:47:58 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:09.166 16:47:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:09.166 16:47:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:09.166 16:47:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:09.166 16:47:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:09.166 16:47:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:09.166 16:47:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:09.166 16:47:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:09.166 256+0 records in 00:09:09.166 256+0 records out 00:09:09.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106784 s, 98.2 MB/s 00:09:09.166 16:47:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:09.166 16:47:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:09.425 256+0 records in 00:09:09.425 256+0 records out 00:09:09.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198184 s, 52.9 MB/s 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:09.425 256+0 records in 00:09:09.425 256+0 records out 00:09:09.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262679 s, 39.9 MB/s 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@51 -- # local i 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:09.425 16:47:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:09.684 16:47:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:09.684 16:47:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:09.684 16:47:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:09.684 16:47:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:09.684 16:47:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:09.684 16:47:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:09.684 16:47:58 -- bdev/nbd_common.sh@41 -- # break 00:09:09.684 16:47:58 -- bdev/nbd_common.sh@45 -- # return 0 00:09:09.684 16:47:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:09.684 16:47:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:09.943 16:47:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:09.943 16:47:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:09.943 16:47:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:09.943 16:47:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:09.943 16:47:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:09.943 16:47:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:09.943 16:47:58 -- bdev/nbd_common.sh@41 -- # break 00:09:09.943 16:47:58 -- bdev/nbd_common.sh@45 -- # return 0 00:09:09.943 16:47:58 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:09.943 16:47:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.943 16:47:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:10.202 16:47:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:10.202 16:47:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:10.202 16:47:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:10.202 16:47:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:10.202 16:47:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:10.202 16:47:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:10.202 16:47:58 -- bdev/nbd_common.sh@65 -- # true 00:09:10.202 16:47:58 -- bdev/nbd_common.sh@65 -- # count=0 00:09:10.202 16:47:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:10.202 16:47:58 -- bdev/nbd_common.sh@104 -- # count=0 00:09:10.202 16:47:58 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:10.202 16:47:58 -- bdev/nbd_common.sh@109 -- # return 0 00:09:10.202 16:47:58 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:10.461 16:47:59 -- event/event.sh@35 -- # sleep 3 00:09:11.836 [2024-11-05 16:48:00.316479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:11.836 [2024-11-05 16:48:00.491448] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:09:11.836 [2024-11-05 16:48:00.491457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.836 [2024-11-05 16:48:00.661687] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:11.836 [2024-11-05 16:48:00.661990] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:13.739 spdk_app_start Round 2 00:09:13.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:13.739 16:48:02 -- event/event.sh@23 -- # for i in {0..2} 00:09:13.739 16:48:02 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:13.739 16:48:02 -- event/event.sh@25 -- # waitforlisten 104460 /var/tmp/spdk-nbd.sock 00:09:13.739 16:48:02 -- common/autotest_common.sh@829 -- # '[' -z 104460 ']' 00:09:13.739 16:48:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:13.739 16:48:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:13.739 16:48:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:13.739 16:48:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:13.739 16:48:02 -- common/autotest_common.sh@10 -- # set +x 00:09:13.739 16:48:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:13.739 16:48:02 -- common/autotest_common.sh@862 -- # return 0 00:09:13.739 16:48:02 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:13.998 Malloc0 00:09:13.998 16:48:02 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:14.566 Malloc1 00:09:14.566 16:48:03 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:14.566 16:48:03 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.566 16:48:03 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:14.566 16:48:03 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:14.566 16:48:03 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:14.566 16:48:03 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:14.566 16:48:03 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:14.566 16:48:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.566 16:48:03 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:14.566 16:48:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:14.566 16:48:03 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:14.566 16:48:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:14.566 16:48:03 -- bdev/nbd_common.sh@12 -- # local i 00:09:14.566 16:48:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:14.566 16:48:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:14.566 16:48:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:14.566 /dev/nbd0 00:09:14.566 16:48:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:14.566 16:48:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:14.566 16:48:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:14.566 16:48:03 -- common/autotest_common.sh@867 -- # local i 
00:09:14.566 16:48:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:14.566 16:48:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:14.566 16:48:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:14.566 16:48:03 -- common/autotest_common.sh@871 -- # break 00:09:14.566 16:48:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:14.566 16:48:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:14.566 16:48:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:14.566 1+0 records in 00:09:14.566 1+0 records out 00:09:14.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651407 s, 6.3 MB/s 00:09:14.566 16:48:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:14.566 16:48:03 -- common/autotest_common.sh@884 -- # size=4096 00:09:14.566 16:48:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:14.566 16:48:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:14.566 16:48:03 -- common/autotest_common.sh@887 -- # return 0 00:09:14.566 16:48:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:14.566 16:48:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:14.566 16:48:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:14.825 /dev/nbd1 00:09:14.825 16:48:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:14.825 16:48:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:14.825 16:48:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:14.825 16:48:03 -- common/autotest_common.sh@867 -- # local i 00:09:14.825 16:48:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:14.825 16:48:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:14.825 16:48:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:14.825 16:48:03 -- common/autotest_common.sh@871 -- # break 00:09:14.825 16:48:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:14.825 16:48:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:14.825 16:48:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:14.825 1+0 records in 00:09:14.825 1+0 records out 00:09:14.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000731247 s, 5.6 MB/s 00:09:14.825 16:48:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:15.084 16:48:03 -- common/autotest_common.sh@884 -- # size=4096 00:09:15.084 16:48:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:15.084 16:48:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:15.084 16:48:03 -- common/autotest_common.sh@887 -- # return 0 00:09:15.084 16:48:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:15.084 16:48:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:15.084 16:48:03 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:15.084 16:48:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.084 16:48:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:15.084 16:48:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:15.084 { 00:09:15.084 "nbd_device": "/dev/nbd0", 00:09:15.084 
"bdev_name": "Malloc0" 00:09:15.084 }, 00:09:15.084 { 00:09:15.084 "nbd_device": "/dev/nbd1", 00:09:15.084 "bdev_name": "Malloc1" 00:09:15.084 } 00:09:15.084 ]' 00:09:15.084 16:48:03 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:15.084 { 00:09:15.084 "nbd_device": "/dev/nbd0", 00:09:15.084 "bdev_name": "Malloc0" 00:09:15.084 }, 00:09:15.084 { 00:09:15.084 "nbd_device": "/dev/nbd1", 00:09:15.084 "bdev_name": "Malloc1" 00:09:15.084 } 00:09:15.084 ]' 00:09:15.084 16:48:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:15.395 16:48:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:15.395 /dev/nbd1' 00:09:15.395 16:48:03 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:15.395 /dev/nbd1' 00:09:15.395 16:48:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:15.395 16:48:03 -- bdev/nbd_common.sh@65 -- # count=2 00:09:15.395 16:48:03 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:15.395 16:48:03 -- bdev/nbd_common.sh@95 -- # count=2 00:09:15.395 16:48:03 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:15.395 16:48:03 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:15.395 16:48:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.395 16:48:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:15.395 16:48:03 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:15.395 16:48:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:15.395 16:48:03 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:15.395 16:48:03 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:15.395 256+0 records in 00:09:15.395 256+0 records out 00:09:15.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109858 s, 95.4 MB/s 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:15.395 256+0 records in 00:09:15.395 256+0 records out 00:09:15.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229116 s, 45.8 MB/s 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:15.395 256+0 records in 00:09:15.395 256+0 records out 00:09:15.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314499 s, 33.3 MB/s 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@51 -- # local i 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.395 16:48:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:15.653 16:48:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:15.653 16:48:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:15.653 16:48:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:15.653 16:48:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.653 16:48:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.653 16:48:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:15.653 16:48:04 -- bdev/nbd_common.sh@41 -- # break 00:09:15.653 16:48:04 -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.653 16:48:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.653 16:48:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:15.912 16:48:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:15.912 16:48:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:15.912 16:48:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:15.912 16:48:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.912 16:48:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.912 16:48:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:15.912 16:48:04 -- bdev/nbd_common.sh@41 -- # break 00:09:15.912 16:48:04 -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.912 16:48:04 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:15.912 16:48:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.912 16:48:04 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:16.170 16:48:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:16.170 16:48:04 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:16.170 16:48:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:16.170 16:48:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:16.170 16:48:04 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:16.170 16:48:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:16.170 16:48:04 -- bdev/nbd_common.sh@65 -- # true 00:09:16.170 16:48:04 -- bdev/nbd_common.sh@65 -- # count=0 00:09:16.170 16:48:04 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:16.170 16:48:04 -- bdev/nbd_common.sh@104 -- # count=0 00:09:16.170 16:48:04 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:16.170 16:48:04 -- bdev/nbd_common.sh@109 -- # return 0 00:09:16.170 16:48:04 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:16.429 16:48:05 -- event/event.sh@35 -- # sleep 3 00:09:17.365 [2024-11-05 16:48:06.178819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:17.624 
[2024-11-05 16:48:06.320452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.624 [2024-11-05 16:48:06.320463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.624 [2024-11-05 16:48:06.474511] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:17.624 [2024-11-05 16:48:06.474642] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:19.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:19.527 16:48:08 -- event/event.sh@38 -- # waitforlisten 104460 /var/tmp/spdk-nbd.sock 00:09:19.527 16:48:08 -- common/autotest_common.sh@829 -- # '[' -z 104460 ']' 00:09:19.527 16:48:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:19.527 16:48:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:19.527 16:48:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:19.527 16:48:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:19.527 16:48:08 -- common/autotest_common.sh@10 -- # set +x 00:09:19.786 16:48:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:19.786 16:48:08 -- common/autotest_common.sh@862 -- # return 0 00:09:19.786 16:48:08 -- event/event.sh@39 -- # killprocess 104460 00:09:19.786 16:48:08 -- common/autotest_common.sh@936 -- # '[' -z 104460 ']' 00:09:19.786 16:48:08 -- common/autotest_common.sh@940 -- # kill -0 104460 00:09:19.786 16:48:08 -- common/autotest_common.sh@941 -- # uname 00:09:19.786 16:48:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:19.786 16:48:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104460 00:09:19.786 16:48:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:19.786 16:48:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:19.786 16:48:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 104460' 00:09:19.786 killing process with pid 104460 00:09:19.786 16:48:08 -- common/autotest_common.sh@955 -- # kill 104460 00:09:19.786 16:48:08 -- common/autotest_common.sh@960 -- # wait 104460 00:09:20.723 spdk_app_start is called in Round 0. 00:09:20.724 Shutdown signal received, stop current app iteration 00:09:20.724 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:09:20.724 spdk_app_start is called in Round 1. 00:09:20.724 Shutdown signal received, stop current app iteration 00:09:20.724 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:09:20.724 spdk_app_start is called in Round 2. 00:09:20.724 Shutdown signal received, stop current app iteration 00:09:20.724 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:09:20.724 spdk_app_start is called in Round 3. 
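The Round messages come from a fixed repeat loop: each iteration waits for the nbd app's RPC socket, recreates the Malloc bdevs, reruns the dd/cmp verification, then asks the app to SIGTERM itself and sleeps before the next round. A sketch of that shape (the last round is torn down by killprocess instead):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # ... bdev_malloc_create x2, nbd_rpc_data_verify, nbd_stop_disks ...
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3                                    # matches the 'sleep 3' entries above
    done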
00:09:20.724 Shutdown signal received, stop current app iteration 00:09:20.724 ************************************ 00:09:20.724 END TEST app_repeat 00:09:20.724 ************************************ 00:09:20.724 16:48:09 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:20.724 16:48:09 -- event/event.sh@42 -- # return 0 00:09:20.724 00:09:20.724 real 0m19.781s 00:09:20.724 user 0m42.565s 00:09:20.724 sys 0m2.795s 00:09:20.724 16:48:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:20.724 16:48:09 -- common/autotest_common.sh@10 -- # set +x 00:09:20.724 16:48:09 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:20.724 16:48:09 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:20.724 16:48:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:20.724 16:48:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:20.724 16:48:09 -- common/autotest_common.sh@10 -- # set +x 00:09:20.724 ************************************ 00:09:20.724 START TEST cpu_locks 00:09:20.724 ************************************ 00:09:20.724 16:48:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:20.724 * Looking for test storage... 00:09:20.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:20.724 16:48:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:20.724 16:48:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:20.724 16:48:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:20.724 16:48:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:20.724 16:48:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:20.724 16:48:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:20.724 16:48:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:20.724 16:48:09 -- scripts/common.sh@335 -- # IFS=.-: 00:09:20.724 16:48:09 -- scripts/common.sh@335 -- # read -ra ver1 00:09:20.724 16:48:09 -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.724 16:48:09 -- scripts/common.sh@336 -- # read -ra ver2 00:09:20.724 16:48:09 -- scripts/common.sh@337 -- # local 'op=<' 00:09:20.724 16:48:09 -- scripts/common.sh@339 -- # ver1_l=2 00:09:20.724 16:48:09 -- scripts/common.sh@340 -- # ver2_l=1 00:09:20.724 16:48:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:20.724 16:48:09 -- scripts/common.sh@343 -- # case "$op" in 00:09:20.724 16:48:09 -- scripts/common.sh@344 -- # : 1 00:09:20.724 16:48:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:20.724 16:48:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.724 16:48:09 -- scripts/common.sh@364 -- # decimal 1 00:09:20.724 16:48:09 -- scripts/common.sh@352 -- # local d=1 00:09:20.724 16:48:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.724 16:48:09 -- scripts/common.sh@354 -- # echo 1 00:09:20.724 16:48:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:20.724 16:48:09 -- scripts/common.sh@365 -- # decimal 2 00:09:20.724 16:48:09 -- scripts/common.sh@352 -- # local d=2 00:09:20.724 16:48:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.724 16:48:09 -- scripts/common.sh@354 -- # echo 2 00:09:20.724 16:48:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:20.724 16:48:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:20.724 16:48:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:20.724 16:48:09 -- scripts/common.sh@367 -- # return 0 00:09:20.724 16:48:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.724 16:48:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:20.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.724 --rc genhtml_branch_coverage=1 00:09:20.724 --rc genhtml_function_coverage=1 00:09:20.724 --rc genhtml_legend=1 00:09:20.724 --rc geninfo_all_blocks=1 00:09:20.724 --rc geninfo_unexecuted_blocks=1 00:09:20.724 00:09:20.724 ' 00:09:20.724 16:48:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:20.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.724 --rc genhtml_branch_coverage=1 00:09:20.724 --rc genhtml_function_coverage=1 00:09:20.724 --rc genhtml_legend=1 00:09:20.724 --rc geninfo_all_blocks=1 00:09:20.724 --rc geninfo_unexecuted_blocks=1 00:09:20.724 00:09:20.724 ' 00:09:20.724 16:48:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:20.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.724 --rc genhtml_branch_coverage=1 00:09:20.724 --rc genhtml_function_coverage=1 00:09:20.724 --rc genhtml_legend=1 00:09:20.724 --rc geninfo_all_blocks=1 00:09:20.724 --rc geninfo_unexecuted_blocks=1 00:09:20.724 00:09:20.724 ' 00:09:20.724 16:48:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:20.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.724 --rc genhtml_branch_coverage=1 00:09:20.724 --rc genhtml_function_coverage=1 00:09:20.724 --rc genhtml_legend=1 00:09:20.724 --rc geninfo_all_blocks=1 00:09:20.724 --rc geninfo_unexecuted_blocks=1 00:09:20.724 00:09:20.724 ' 00:09:20.724 16:48:09 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:20.724 16:48:09 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:20.724 16:48:09 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:20.724 16:48:09 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:20.724 16:48:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:20.724 16:48:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:20.724 16:48:09 -- common/autotest_common.sh@10 -- # set +x 00:09:20.983 ************************************ 00:09:20.983 START TEST default_locks 00:09:20.983 ************************************ 00:09:20.983 16:48:09 -- common/autotest_common.sh@1114 -- # default_locks 00:09:20.983 16:48:09 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=104989 00:09:20.983 16:48:09 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:20.983 16:48:09 -- event/cpu_locks.sh@47 -- # 
waitforlisten 104989 00:09:20.983 16:48:09 -- common/autotest_common.sh@829 -- # '[' -z 104989 ']' 00:09:20.983 16:48:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.983 16:48:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.983 16:48:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.983 16:48:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.983 16:48:09 -- common/autotest_common.sh@10 -- # set +x 00:09:20.983 [2024-11-05 16:48:09.694208] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:20.983 [2024-11-05 16:48:09.694407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104989 ] 00:09:20.983 [2024-11-05 16:48:09.856828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.242 [2024-11-05 16:48:10.012438] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:21.242 [2024-11-05 16:48:10.012968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.619 16:48:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:22.619 16:48:11 -- common/autotest_common.sh@862 -- # return 0 00:09:22.619 16:48:11 -- event/cpu_locks.sh@49 -- # locks_exist 104989 00:09:22.619 16:48:11 -- event/cpu_locks.sh@22 -- # lslocks -p 104989 00:09:22.619 16:48:11 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:22.877 16:48:11 -- event/cpu_locks.sh@50 -- # killprocess 104989 00:09:22.877 16:48:11 -- common/autotest_common.sh@936 -- # '[' -z 104989 ']' 00:09:22.877 16:48:11 -- common/autotest_common.sh@940 -- # kill -0 104989 00:09:22.877 16:48:11 -- common/autotest_common.sh@941 -- # uname 00:09:22.877 16:48:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:22.877 16:48:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104989 00:09:22.877 16:48:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:22.877 16:48:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:22.877 16:48:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 104989' 00:09:22.877 killing process with pid 104989 00:09:22.877 16:48:11 -- common/autotest_common.sh@955 -- # kill 104989 00:09:22.877 16:48:11 -- common/autotest_common.sh@960 -- # wait 104989 00:09:24.780 16:48:13 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 104989 00:09:24.780 16:48:13 -- common/autotest_common.sh@650 -- # local es=0 00:09:24.780 16:48:13 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 104989 00:09:24.780 16:48:13 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:24.780 16:48:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.780 16:48:13 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:24.780 16:48:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.780 16:48:13 -- common/autotest_common.sh@653 -- # waitforlisten 104989 00:09:24.780 16:48:13 -- common/autotest_common.sh@829 -- # '[' -z 104989 ']' 00:09:24.780 16:48:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
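The locks_exist check above is the positive half of default_locks: a freshly started spdk_tgt -m 0x1 must hold a POSIX lock on its per-core lock file. The same check can be run by hand against a live target (pid and paths as in the trace):

lslocks -p 104989 | grep -q spdk_cpu_lock && echo "core lock held"   # exits 0 only while the target holds the lock
ls /var/tmp/spdk_cpu_lock_*                                          # one file per claimed core; ..._000 is core 0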
00:09:24.780 16:48:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:24.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.780 16:48:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.780 16:48:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:24.780 16:48:13 -- common/autotest_common.sh@10 -- # set +x 00:09:24.781 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (104989) - No such process 00:09:24.781 ERROR: process (pid: 104989) is no longer running 00:09:24.781 16:48:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:24.781 16:48:13 -- common/autotest_common.sh@862 -- # return 1 00:09:24.781 16:48:13 -- common/autotest_common.sh@653 -- # es=1 00:09:24.781 16:48:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:24.781 16:48:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:24.781 16:48:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:24.781 16:48:13 -- event/cpu_locks.sh@54 -- # no_locks 00:09:24.781 16:48:13 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:24.781 16:48:13 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:24.781 16:48:13 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:24.781 00:09:24.781 real 0m3.785s 00:09:24.781 user 0m3.877s 00:09:24.781 sys 0m0.701s 00:09:24.781 16:48:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:24.781 16:48:13 -- common/autotest_common.sh@10 -- # set +x 00:09:24.781 ************************************ 00:09:24.781 END TEST default_locks 00:09:24.781 ************************************ 00:09:24.781 16:48:13 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:24.781 16:48:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:24.781 16:48:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:24.781 16:48:13 -- common/autotest_common.sh@10 -- # set +x 00:09:24.781 ************************************ 00:09:24.781 START TEST default_locks_via_rpc 00:09:24.781 ************************************ 00:09:24.781 16:48:13 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:09:24.781 16:48:13 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=105074 00:09:24.781 16:48:13 -- event/cpu_locks.sh@63 -- # waitforlisten 105074 00:09:24.781 16:48:13 -- common/autotest_common.sh@829 -- # '[' -z 105074 ']' 00:09:24.781 16:48:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.781 16:48:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:24.781 16:48:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.781 16:48:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:24.781 16:48:13 -- common/autotest_common.sh@10 -- # set +x 00:09:24.781 16:48:13 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:24.781 [2024-11-05 16:48:13.517413] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
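The NOT/valid_exec_arg sequence above inverts an expected failure: waitforlisten against the killed pid has to fail ("No such process"), and the harness converts that failure into a pass via es=1. A minimal stand-in for the helper (the real NOT in autotest_common.sh also validates that its argument is a callable function or binary before running it):

NOT() {
    if "$@"; then return 1; else return 0; fi   # succeed only when the wrapped command fails
}
NOT waitforlisten 104989                        # pid 104989 is gone, so waitforlisten fails and NOT returns 0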
00:09:24.781 [2024-11-05 16:48:13.517783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105074 ] 00:09:24.781 [2024-11-05 16:48:13.670208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.039 [2024-11-05 16:48:13.842436] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:25.039 [2024-11-05 16:48:13.842980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.416 16:48:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.416 16:48:15 -- common/autotest_common.sh@862 -- # return 0 00:09:26.417 16:48:15 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:26.417 16:48:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.417 16:48:15 -- common/autotest_common.sh@10 -- # set +x 00:09:26.417 16:48:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.417 16:48:15 -- event/cpu_locks.sh@67 -- # no_locks 00:09:26.417 16:48:15 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:26.417 16:48:15 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:26.417 16:48:15 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:26.417 16:48:15 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:26.417 16:48:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.417 16:48:15 -- common/autotest_common.sh@10 -- # set +x 00:09:26.417 16:48:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.417 16:48:15 -- event/cpu_locks.sh@71 -- # locks_exist 105074 00:09:26.417 16:48:15 -- event/cpu_locks.sh@22 -- # lslocks -p 105074 00:09:26.417 16:48:15 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:26.675 16:48:15 -- event/cpu_locks.sh@73 -- # killprocess 105074 00:09:26.675 16:48:15 -- common/autotest_common.sh@936 -- # '[' -z 105074 ']' 00:09:26.675 16:48:15 -- common/autotest_common.sh@940 -- # kill -0 105074 00:09:26.675 16:48:15 -- common/autotest_common.sh@941 -- # uname 00:09:26.675 16:48:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:26.675 16:48:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105074 00:09:26.675 16:48:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:26.675 16:48:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:26.675 16:48:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 105074' 00:09:26.675 killing process with pid 105074 00:09:26.675 16:48:15 -- common/autotest_common.sh@955 -- # kill 105074 00:09:26.675 16:48:15 -- common/autotest_common.sh@960 -- # wait 105074 00:09:28.577 00:09:28.577 real 0m3.797s 00:09:28.577 user 0m3.979s 00:09:28.577 sys 0m0.673s 00:09:28.577 16:48:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:28.577 16:48:17 -- common/autotest_common.sh@10 -- # set +x 00:09:28.577 ************************************ 00:09:28.577 END TEST default_locks_via_rpc 00:09:28.577 ************************************ 00:09:28.577 16:48:17 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:28.577 16:48:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:28.577 16:48:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:28.577 16:48:17 -- common/autotest_common.sh@10 -- # set +x 00:09:28.577 
************************************ 00:09:28.577 START TEST non_locking_app_on_locked_coremask 00:09:28.577 ************************************ 00:09:28.577 16:48:17 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:09:28.577 16:48:17 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=105156 00:09:28.577 16:48:17 -- event/cpu_locks.sh@81 -- # waitforlisten 105156 /var/tmp/spdk.sock 00:09:28.577 16:48:17 -- common/autotest_common.sh@829 -- # '[' -z 105156 ']' 00:09:28.577 16:48:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.577 16:48:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.577 16:48:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.577 16:48:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.577 16:48:17 -- common/autotest_common.sh@10 -- # set +x 00:09:28.577 16:48:17 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:28.577 [2024-11-05 16:48:17.364197] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:28.577 [2024-11-05 16:48:17.364550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105156 ] 00:09:28.836 [2024-11-05 16:48:17.519403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.836 [2024-11-05 16:48:17.684132] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:28.836 [2024-11-05 16:48:17.684681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.213 16:48:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:30.213 16:48:18 -- common/autotest_common.sh@862 -- # return 0 00:09:30.213 16:48:18 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=105191 00:09:30.213 16:48:18 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:30.213 16:48:18 -- event/cpu_locks.sh@85 -- # waitforlisten 105191 /var/tmp/spdk2.sock 00:09:30.213 16:48:18 -- common/autotest_common.sh@829 -- # '[' -z 105191 ']' 00:09:30.213 16:48:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:30.213 16:48:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:30.213 16:48:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:30.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:30.213 16:48:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:30.213 16:48:18 -- common/autotest_common.sh@10 -- # set +x 00:09:30.213 [2024-11-05 16:48:19.060377] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:30.213 [2024-11-05 16:48:19.060576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105191 ] 00:09:30.471 [2024-11-05 16:48:19.222705] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
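The "CPU core locks deactivated" notice is the crux of this test: the second target may share core 0 with pid 105156 only because it skips lock claiming, and it gets its own RPC socket so the two daemons do not collide. Reduced to the two launches seen in the trace (backgrounding added for the sketch):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &                                          # claims /var/tmp/spdk_cpu_lock_000
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # no claim, separate RPC socket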
00:09:30.471 [2024-11-05 16:48:19.235040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.730 [2024-11-05 16:48:19.569390] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:30.730 [2024-11-05 16:48:19.569929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.630 16:48:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.630 16:48:21 -- common/autotest_common.sh@862 -- # return 0 00:09:32.630 16:48:21 -- event/cpu_locks.sh@87 -- # locks_exist 105156 00:09:32.630 16:48:21 -- event/cpu_locks.sh@22 -- # lslocks -p 105156 00:09:32.630 16:48:21 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:32.889 16:48:21 -- event/cpu_locks.sh@89 -- # killprocess 105156 00:09:32.889 16:48:21 -- common/autotest_common.sh@936 -- # '[' -z 105156 ']' 00:09:32.889 16:48:21 -- common/autotest_common.sh@940 -- # kill -0 105156 00:09:32.889 16:48:21 -- common/autotest_common.sh@941 -- # uname 00:09:32.889 16:48:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:32.889 16:48:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105156 00:09:32.889 16:48:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:32.889 16:48:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:32.889 16:48:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 105156' 00:09:32.889 killing process with pid 105156 00:09:32.889 16:48:21 -- common/autotest_common.sh@955 -- # kill 105156 00:09:32.889 16:48:21 -- common/autotest_common.sh@960 -- # wait 105156 00:09:37.077 16:48:25 -- event/cpu_locks.sh@90 -- # killprocess 105191 00:09:37.077 16:48:25 -- common/autotest_common.sh@936 -- # '[' -z 105191 ']' 00:09:37.077 16:48:25 -- common/autotest_common.sh@940 -- # kill -0 105191 00:09:37.077 16:48:25 -- common/autotest_common.sh@941 -- # uname 00:09:37.077 16:48:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:37.077 16:48:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105191 00:09:37.077 16:48:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:37.077 16:48:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:37.077 16:48:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 105191' 00:09:37.077 killing process with pid 105191 00:09:37.077 16:48:25 -- common/autotest_common.sh@955 -- # kill 105191 00:09:37.077 16:48:25 -- common/autotest_common.sh@960 -- # wait 105191 00:09:38.453 00:09:38.453 real 0m9.768s 00:09:38.453 user 0m10.418s 00:09:38.453 sys 0m1.327s 00:09:38.453 16:48:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:38.453 16:48:27 -- common/autotest_common.sh@10 -- # set +x 00:09:38.453 ************************************ 00:09:38.453 END TEST non_locking_app_on_locked_coremask 00:09:38.453 ************************************ 00:09:38.453 16:48:27 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:38.453 16:48:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:38.453 16:48:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:38.453 16:48:27 -- common/autotest_common.sh@10 -- # set +x 00:09:38.453 ************************************ 00:09:38.453 START TEST locking_app_on_unlocked_coremask 00:09:38.453 ************************************ 00:09:38.453 16:48:27 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:09:38.453 
16:48:27 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=105326 00:09:38.453 16:48:27 -- event/cpu_locks.sh@99 -- # waitforlisten 105326 /var/tmp/spdk.sock 00:09:38.454 16:48:27 -- common/autotest_common.sh@829 -- # '[' -z 105326 ']' 00:09:38.454 16:48:27 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:38.454 16:48:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.454 16:48:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:38.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.454 16:48:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.454 16:48:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:38.454 16:48:27 -- common/autotest_common.sh@10 -- # set +x 00:09:38.454 [2024-11-05 16:48:27.197682] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:38.454 [2024-11-05 16:48:27.198383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105326 ] 00:09:38.712 [2024-11-05 16:48:27.364023] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:38.712 [2024-11-05 16:48:27.364421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.712 [2024-11-05 16:48:27.535649] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:38.712 [2024-11-05 16:48:27.536079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.088 16:48:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:40.088 16:48:28 -- common/autotest_common.sh@862 -- # return 0 00:09:40.088 16:48:28 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=105354 00:09:40.088 16:48:28 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:40.088 16:48:28 -- event/cpu_locks.sh@103 -- # waitforlisten 105354 /var/tmp/spdk2.sock 00:09:40.088 16:48:28 -- common/autotest_common.sh@829 -- # '[' -z 105354 ']' 00:09:40.088 16:48:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:40.088 16:48:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:40.088 16:48:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:40.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:40.088 16:48:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:40.088 16:48:28 -- common/autotest_common.sh@10 -- # set +x 00:09:40.088 [2024-11-05 16:48:28.864549] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:40.088 [2024-11-05 16:48:28.864759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105354 ] 00:09:40.347 [2024-11-05 16:48:29.027886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.605 [2024-11-05 16:48:29.379472] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:40.605 [2024-11-05 16:48:29.380218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.505 16:48:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:42.505 16:48:31 -- common/autotest_common.sh@862 -- # return 0 00:09:42.505 16:48:31 -- event/cpu_locks.sh@105 -- # locks_exist 105354 00:09:42.505 16:48:31 -- event/cpu_locks.sh@22 -- # lslocks -p 105354 00:09:42.505 16:48:31 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:42.763 16:48:31 -- event/cpu_locks.sh@107 -- # killprocess 105326 00:09:42.763 16:48:31 -- common/autotest_common.sh@936 -- # '[' -z 105326 ']' 00:09:42.763 16:48:31 -- common/autotest_common.sh@940 -- # kill -0 105326 00:09:42.763 16:48:31 -- common/autotest_common.sh@941 -- # uname 00:09:42.763 16:48:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:42.763 16:48:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105326 00:09:42.763 16:48:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:42.763 16:48:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:42.763 16:48:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 105326' 00:09:42.763 killing process with pid 105326 00:09:42.763 16:48:31 -- common/autotest_common.sh@955 -- # kill 105326 00:09:42.763 16:48:31 -- common/autotest_common.sh@960 -- # wait 105326 00:09:46.973 16:48:35 -- event/cpu_locks.sh@108 -- # killprocess 105354 00:09:46.973 16:48:35 -- common/autotest_common.sh@936 -- # '[' -z 105354 ']' 00:09:46.973 16:48:35 -- common/autotest_common.sh@940 -- # kill -0 105354 00:09:46.973 16:48:35 -- common/autotest_common.sh@941 -- # uname 00:09:46.973 16:48:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:46.973 16:48:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105354 00:09:46.973 16:48:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:46.973 16:48:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:46.973 16:48:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 105354' 00:09:46.973 killing process with pid 105354 00:09:46.973 16:48:35 -- common/autotest_common.sh@955 -- # kill 105354 00:09:46.973 16:48:35 -- common/autotest_common.sh@960 -- # wait 105354 00:09:48.351 00:09:48.351 real 0m9.807s 00:09:48.351 user 0m10.450s 00:09:48.351 sys 0m1.254s 00:09:48.351 16:48:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:48.351 16:48:36 -- common/autotest_common.sh@10 -- # set +x 00:09:48.351 ************************************ 00:09:48.351 END TEST locking_app_on_unlocked_coremask 00:09:48.351 ************************************ 00:09:48.351 16:48:36 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:48.351 16:48:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:48.351 16:48:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:48.351 16:48:36 -- 
common/autotest_common.sh@10 -- # set +x 00:09:48.351 ************************************ 00:09:48.351 START TEST locking_app_on_locked_coremask 00:09:48.351 ************************************ 00:09:48.351 16:48:36 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:09:48.351 16:48:36 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=105490 00:09:48.351 16:48:36 -- event/cpu_locks.sh@116 -- # waitforlisten 105490 /var/tmp/spdk.sock 00:09:48.351 16:48:36 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:48.351 16:48:36 -- common/autotest_common.sh@829 -- # '[' -z 105490 ']' 00:09:48.351 16:48:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.351 16:48:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:48.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.351 16:48:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.351 16:48:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:48.351 16:48:36 -- common/autotest_common.sh@10 -- # set +x 00:09:48.351 [2024-11-05 16:48:37.049454] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:48.351 [2024-11-05 16:48:37.050302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105490 ] 00:09:48.351 [2024-11-05 16:48:37.200205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.610 [2024-11-05 16:48:37.383076] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:48.610 [2024-11-05 16:48:37.383731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.987 16:48:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:49.987 16:48:38 -- common/autotest_common.sh@862 -- # return 0 00:09:49.987 16:48:38 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=105525 00:09:49.987 16:48:38 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:49.987 16:48:38 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 105525 /var/tmp/spdk2.sock 00:09:49.987 16:48:38 -- common/autotest_common.sh@650 -- # local es=0 00:09:49.987 16:48:38 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 105525 /var/tmp/spdk2.sock 00:09:49.987 16:48:38 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:49.987 16:48:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.987 16:48:38 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:49.987 16:48:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.987 16:48:38 -- common/autotest_common.sh@653 -- # waitforlisten 105525 /var/tmp/spdk2.sock 00:09:49.987 16:48:38 -- common/autotest_common.sh@829 -- # '[' -z 105525 ']' 00:09:49.987 16:48:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:49.987 16:48:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:49.987 16:48:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:09:49.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:49.987 16:48:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:49.987 16:48:38 -- common/autotest_common.sh@10 -- # set +x 00:09:49.987 [2024-11-05 16:48:38.793401] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:49.987 [2024-11-05 16:48:38.794147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105525 ] 00:09:50.246 [2024-11-05 16:48:38.954222] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 105490 has claimed it. 00:09:50.246 [2024-11-05 16:48:38.971040] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:50.814 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (105525) - No such process 00:09:50.814 ERROR: process (pid: 105525) is no longer running 00:09:50.814 16:48:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:50.815 16:48:39 -- common/autotest_common.sh@862 -- # return 1 00:09:50.815 16:48:39 -- common/autotest_common.sh@653 -- # es=1 00:09:50.815 16:48:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:50.815 16:48:39 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:50.815 16:48:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:50.815 16:48:39 -- event/cpu_locks.sh@122 -- # locks_exist 105490 00:09:50.815 16:48:39 -- event/cpu_locks.sh@22 -- # lslocks -p 105490 00:09:50.815 16:48:39 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:50.815 16:48:39 -- event/cpu_locks.sh@124 -- # killprocess 105490 00:09:50.815 16:48:39 -- common/autotest_common.sh@936 -- # '[' -z 105490 ']' 00:09:50.815 16:48:39 -- common/autotest_common.sh@940 -- # kill -0 105490 00:09:50.815 16:48:39 -- common/autotest_common.sh@941 -- # uname 00:09:50.815 16:48:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:50.815 16:48:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105490 00:09:50.815 16:48:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:50.815 16:48:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:50.815 16:48:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 105490' 00:09:50.815 killing process with pid 105490 00:09:50.815 16:48:39 -- common/autotest_common.sh@955 -- # kill 105490 00:09:50.815 16:48:39 -- common/autotest_common.sh@960 -- # wait 105490 00:09:52.717 00:09:52.717 real 0m4.497s 00:09:52.717 user 0m4.970s 00:09:52.717 sys 0m0.725s 00:09:52.717 16:48:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:52.717 16:48:41 -- common/autotest_common.sh@10 -- # set +x 00:09:52.717 ************************************ 00:09:52.717 END TEST locking_app_on_locked_coremask 00:09:52.717 ************************************ 00:09:52.717 16:48:41 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:52.717 16:48:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:52.717 16:48:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:52.717 16:48:41 -- common/autotest_common.sh@10 -- # set +x 00:09:52.717 ************************************ 00:09:52.717 START TEST locking_overlapped_coremask 00:09:52.717 
************************************ 00:09:52.717 16:48:41 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:09:52.717 16:48:41 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=105589 00:09:52.717 16:48:41 -- event/cpu_locks.sh@133 -- # waitforlisten 105589 /var/tmp/spdk.sock 00:09:52.717 16:48:41 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:52.717 16:48:41 -- common/autotest_common.sh@829 -- # '[' -z 105589 ']' 00:09:52.717 16:48:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.717 16:48:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:52.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.717 16:48:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.717 16:48:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:52.717 16:48:41 -- common/autotest_common.sh@10 -- # set +x 00:09:52.717 [2024-11-05 16:48:41.607344] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:52.717 [2024-11-05 16:48:41.607526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105589 ] 00:09:52.976 [2024-11-05 16:48:41.782928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:53.235 [2024-11-05 16:48:41.944124] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:53.235 [2024-11-05 16:48:41.944786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.235 [2024-11-05 16:48:41.944879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.235 [2024-11-05 16:48:41.944995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.612 16:48:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:54.612 16:48:43 -- common/autotest_common.sh@862 -- # return 0 00:09:54.612 16:48:43 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=105619 00:09:54.612 16:48:43 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:54.612 16:48:43 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 105619 /var/tmp/spdk2.sock 00:09:54.612 16:48:43 -- common/autotest_common.sh@650 -- # local es=0 00:09:54.612 16:48:43 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 105619 /var/tmp/spdk2.sock 00:09:54.612 16:48:43 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:54.612 16:48:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:54.612 16:48:43 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:54.612 16:48:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:54.612 16:48:43 -- common/autotest_common.sh@653 -- # waitforlisten 105619 /var/tmp/spdk2.sock 00:09:54.612 16:48:43 -- common/autotest_common.sh@829 -- # '[' -z 105619 ']' 00:09:54.612 16:48:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:54.612 16:48:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:54.612 16:48:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:09:54.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:54.612 16:48:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:54.612 16:48:43 -- common/autotest_common.sh@10 -- # set +x 00:09:54.612 [2024-11-05 16:48:43.304399] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:54.613 [2024-11-05 16:48:43.304597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105619 ] 00:09:54.613 [2024-11-05 16:48:43.489262] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 105589 has claimed it. 00:09:54.613 [2024-11-05 16:48:43.489355] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:55.180 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (105619) - No such process 00:09:55.181 ERROR: process (pid: 105619) is no longer running 00:09:55.181 16:48:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:55.181 16:48:43 -- common/autotest_common.sh@862 -- # return 1 00:09:55.181 16:48:43 -- common/autotest_common.sh@653 -- # es=1 00:09:55.181 16:48:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:55.181 16:48:43 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:55.181 16:48:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:55.181 16:48:43 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:55.181 16:48:43 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:55.181 16:48:43 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:55.181 16:48:43 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:55.181 16:48:43 -- event/cpu_locks.sh@141 -- # killprocess 105589 00:09:55.181 16:48:43 -- common/autotest_common.sh@936 -- # '[' -z 105589 ']' 00:09:55.181 16:48:43 -- common/autotest_common.sh@940 -- # kill -0 105589 00:09:55.181 16:48:43 -- common/autotest_common.sh@941 -- # uname 00:09:55.181 16:48:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:55.181 16:48:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105589 00:09:55.181 16:48:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:55.181 16:48:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:55.181 16:48:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 105589' 00:09:55.181 killing process with pid 105589 00:09:55.181 16:48:43 -- common/autotest_common.sh@955 -- # kill 105589 00:09:55.181 16:48:43 -- common/autotest_common.sh@960 -- # wait 105589 00:09:57.105 00:09:57.105 real 0m4.334s 00:09:57.105 user 0m11.804s 00:09:57.105 sys 0m0.588s 00:09:57.105 16:48:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:57.105 16:48:45 -- common/autotest_common.sh@10 -- # set +x 00:09:57.105 ************************************ 00:09:57.105 END TEST locking_overlapped_coremask 00:09:57.105 ************************************ 00:09:57.105 16:48:45 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 
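The failure above is plain bit arithmetic: -m 0x7 pins reactors to cores 0-2 and -m 0x1c asks for cores 2-4, so the second target dies trying to claim the contested core 2 while check_remaining_locks confirms the survivor still holds all three lock files. The collision can be predicted before launching anything:

printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4: bit 2 set, i.e. core 2 would be claimed twice
ls /var/tmp/spdk_cpu_lock_*                         # the survivor's claims: ..._000 ..._001 ..._002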
00:09:57.105 16:48:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:57.105 16:48:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:57.105 16:48:45 -- common/autotest_common.sh@10 -- # set +x 00:09:57.105 ************************************ 00:09:57.105 START TEST locking_overlapped_coremask_via_rpc 00:09:57.105 ************************************ 00:09:57.105 16:48:45 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:09:57.105 16:48:45 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=105683 00:09:57.105 16:48:45 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:57.105 16:48:45 -- event/cpu_locks.sh@149 -- # waitforlisten 105683 /var/tmp/spdk.sock 00:09:57.105 16:48:45 -- common/autotest_common.sh@829 -- # '[' -z 105683 ']' 00:09:57.105 16:48:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.106 16:48:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:57.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.106 16:48:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.106 16:48:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:57.106 16:48:45 -- common/autotest_common.sh@10 -- # set +x 00:09:57.106 [2024-11-05 16:48:45.990062] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:57.106 [2024-11-05 16:48:45.990282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105683 ] 00:09:57.364 [2024-11-05 16:48:46.171198] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:57.364 [2024-11-05 16:48:46.171450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:57.622 [2024-11-05 16:48:46.346440] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:57.623 [2024-11-05 16:48:46.346995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.623 [2024-11-05 16:48:46.347179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.623 [2024-11-05 16:48:46.347081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.999 16:48:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:58.999 16:48:47 -- common/autotest_common.sh@862 -- # return 0 00:09:58.999 16:48:47 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=105722 00:09:58.999 16:48:47 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:58.999 16:48:47 -- event/cpu_locks.sh@153 -- # waitforlisten 105722 /var/tmp/spdk2.sock 00:09:58.999 16:48:47 -- common/autotest_common.sh@829 -- # '[' -z 105722 ']' 00:09:58.999 16:48:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:58.999 16:48:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:58.999 16:48:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:58.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
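Unlike the previous test, the via_rpc variant starts its first target with --disable-cpumask-locks, launches a second overlapping target the same way below, and only then switches locking on through the RPC, so the core-claim conflict surfaces at RPC time rather than at startup. The two launches, condensed from the trace:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2, nothing claimed yet
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4, also unclaimed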
00:09:58.999 16:48:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:58.999 16:48:47 -- common/autotest_common.sh@10 -- # set +x 00:09:58.999 [2024-11-05 16:48:47.695903] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:58.999 [2024-11-05 16:48:47.696664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105722 ] 00:09:58.999 [2024-11-05 16:48:47.880897] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:58.999 [2024-11-05 16:48:47.880966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:59.567 [2024-11-05 16:48:48.267093] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:59.567 [2024-11-05 16:48:48.267514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.567 [2024-11-05 16:48:48.267635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.567 [2024-11-05 16:48:48.267637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:01.468 16:48:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:01.468 16:48:50 -- common/autotest_common.sh@862 -- # return 0 00:10:01.468 16:48:50 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:01.468 16:48:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.468 16:48:50 -- common/autotest_common.sh@10 -- # set +x 00:10:01.468 16:48:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.468 16:48:50 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:01.468 16:48:50 -- common/autotest_common.sh@650 -- # local es=0 00:10:01.468 16:48:50 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:01.468 16:48:50 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:01.468 16:48:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:01.468 16:48:50 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:01.468 16:48:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:01.468 16:48:50 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:01.468 16:48:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.468 16:48:50 -- common/autotest_common.sh@10 -- # set +x 00:10:01.468 [2024-11-05 16:48:50.211026] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 105683 has claimed it. 
00:10:01.468 request:
00:10:01.468 {
00:10:01.468 "method": "framework_enable_cpumask_locks",
00:10:01.468 "req_id": 1
00:10:01.468 }
00:10:01.468 Got JSON-RPC error response
00:10:01.468 response:
00:10:01.468 {
00:10:01.468 "code": -32603,
00:10:01.468 "message": "Failed to claim CPU core: 2"
00:10:01.468 }
00:10:01.468 16:48:50 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:10:01.468 16:48:50 -- common/autotest_common.sh@653 -- # es=1
00:10:01.468 16:48:50 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:10:01.468 16:48:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:10:01.468 16:48:50 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:10:01.468 16:48:50 -- event/cpu_locks.sh@158 -- # waitforlisten 105683 /var/tmp/spdk.sock
00:10:01.468 16:48:50 -- common/autotest_common.sh@829 -- # '[' -z 105683 ']'
00:10:01.468 16:48:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:01.468 16:48:50 -- common/autotest_common.sh@834 -- # local max_retries=100
00:10:01.468 16:48:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
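rpc_cmd in the trace is autotest's thin wrapper that effectively shells out to scripts/rpc.py, so the failing call can be reproduced against the same sockets (run from the SPDK repo root): enabling locks on the first target claims cores 0-2, after which the overlapping second target gets exactly the -32603 error shown above.

scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # first target: claims cores 0-2
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: "Failed to claim CPU core: 2"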
00:10:01.727 16:48:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:01.727 16:48:50 -- common/autotest_common.sh@10 -- # set +x 00:10:02.028 16:48:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:02.028 16:48:50 -- common/autotest_common.sh@862 -- # return 0 00:10:02.028 16:48:50 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:02.028 16:48:50 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:02.028 16:48:50 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:02.028 16:48:50 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:02.028 00:10:02.028 real 0m4.804s 00:10:02.028 user 0m1.977s 00:10:02.028 sys 0m0.280s 00:10:02.028 16:48:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:02.028 16:48:50 -- common/autotest_common.sh@10 -- # set +x 00:10:02.028 ************************************ 00:10:02.028 END TEST locking_overlapped_coremask_via_rpc 00:10:02.028 ************************************ 00:10:02.028 16:48:50 -- event/cpu_locks.sh@174 -- # cleanup 00:10:02.028 16:48:50 -- event/cpu_locks.sh@15 -- # [[ -z 105683 ]] 00:10:02.028 16:48:50 -- event/cpu_locks.sh@15 -- # killprocess 105683 00:10:02.028 16:48:50 -- common/autotest_common.sh@936 -- # '[' -z 105683 ']' 00:10:02.028 16:48:50 -- common/autotest_common.sh@940 -- # kill -0 105683 00:10:02.028 16:48:50 -- common/autotest_common.sh@941 -- # uname 00:10:02.028 16:48:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:02.028 16:48:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105683 00:10:02.028 16:48:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:02.028 16:48:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:02.028 16:48:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 105683' 00:10:02.028 killing process with pid 105683 00:10:02.028 16:48:50 -- common/autotest_common.sh@955 -- # kill 105683 00:10:02.028 16:48:50 -- common/autotest_common.sh@960 -- # wait 105683 00:10:03.930 16:48:52 -- event/cpu_locks.sh@16 -- # [[ -z 105722 ]] 00:10:03.930 16:48:52 -- event/cpu_locks.sh@16 -- # killprocess 105722 00:10:03.930 16:48:52 -- common/autotest_common.sh@936 -- # '[' -z 105722 ']' 00:10:03.930 16:48:52 -- common/autotest_common.sh@940 -- # kill -0 105722 00:10:03.930 16:48:52 -- common/autotest_common.sh@941 -- # uname 00:10:03.930 16:48:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:03.930 16:48:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105722 00:10:03.930 16:48:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:03.930 16:48:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:03.930 killing process with pid 105722 00:10:03.930 16:48:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 105722' 00:10:03.930 16:48:52 -- common/autotest_common.sh@955 -- # kill 105722 00:10:03.930 16:48:52 -- common/autotest_common.sh@960 -- # wait 105722 00:10:05.831 16:48:54 -- event/cpu_locks.sh@18 -- # rm -f 00:10:05.831 16:48:54 -- event/cpu_locks.sh@1 -- # cleanup 00:10:05.831 16:48:54 -- event/cpu_locks.sh@15 -- # [[ -z 105683 ]] 00:10:05.831 16:48:54 -- event/cpu_locks.sh@15 -- # killprocess 105683 00:10:05.831 
16:48:54 -- common/autotest_common.sh@936 -- # '[' -z 105683 ']' 00:10:05.831 16:48:54 -- common/autotest_common.sh@940 -- # kill -0 105683 00:10:05.831 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (105683) - No such process 00:10:05.831 Process with pid 105683 is not found 00:10:05.831 16:48:54 -- common/autotest_common.sh@963 -- # echo 'Process with pid 105683 is not found' 00:10:05.831 16:48:54 -- event/cpu_locks.sh@16 -- # [[ -z 105722 ]] 00:10:05.831 16:48:54 -- event/cpu_locks.sh@16 -- # killprocess 105722 00:10:05.831 16:48:54 -- common/autotest_common.sh@936 -- # '[' -z 105722 ']' 00:10:05.831 16:48:54 -- common/autotest_common.sh@940 -- # kill -0 105722 00:10:05.831 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (105722) - No such process 00:10:05.831 Process with pid 105722 is not found 00:10:05.831 16:48:54 -- common/autotest_common.sh@963 -- # echo 'Process with pid 105722 is not found' 00:10:05.831 16:48:54 -- event/cpu_locks.sh@18 -- # rm -f 00:10:05.831 00:10:05.831 real 0m45.246s 00:10:05.831 user 1m20.442s 00:10:05.831 sys 0m6.535s 00:10:05.831 16:48:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:05.831 16:48:54 -- common/autotest_common.sh@10 -- # set +x 00:10:05.831 ************************************ 00:10:05.831 END TEST cpu_locks 00:10:05.831 ************************************ 00:10:06.090 00:10:06.090 real 1m15.654s 00:10:06.090 user 2m18.925s 00:10:06.090 sys 0m10.286s 00:10:06.090 16:48:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:06.090 16:48:54 -- common/autotest_common.sh@10 -- # set +x 00:10:06.090 ************************************ 00:10:06.090 END TEST event 00:10:06.090 ************************************ 00:10:06.090 16:48:54 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:06.090 16:48:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:06.090 16:48:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:06.090 16:48:54 -- common/autotest_common.sh@10 -- # set +x 00:10:06.090 ************************************ 00:10:06.090 START TEST thread 00:10:06.090 ************************************ 00:10:06.090 16:48:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:06.090 * Looking for test storage... 
00:10:06.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:06.090 16:48:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:06.090 16:48:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:06.090 16:48:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:06.090 16:48:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:06.090 16:48:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:06.090 16:48:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:06.090 16:48:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:06.090 16:48:54 -- scripts/common.sh@335 -- # IFS=.-: 00:10:06.090 16:48:54 -- scripts/common.sh@335 -- # read -ra ver1 00:10:06.090 16:48:54 -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.090 16:48:54 -- scripts/common.sh@336 -- # read -ra ver2 00:10:06.090 16:48:54 -- scripts/common.sh@337 -- # local 'op=<' 00:10:06.090 16:48:54 -- scripts/common.sh@339 -- # ver1_l=2 00:10:06.090 16:48:54 -- scripts/common.sh@340 -- # ver2_l=1 00:10:06.090 16:48:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:06.090 16:48:54 -- scripts/common.sh@343 -- # case "$op" in 00:10:06.090 16:48:54 -- scripts/common.sh@344 -- # : 1 00:10:06.090 16:48:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:06.090 16:48:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:06.090 16:48:54 -- scripts/common.sh@364 -- # decimal 1 00:10:06.090 16:48:54 -- scripts/common.sh@352 -- # local d=1 00:10:06.090 16:48:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.090 16:48:54 -- scripts/common.sh@354 -- # echo 1 00:10:06.090 16:48:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:06.090 16:48:54 -- scripts/common.sh@365 -- # decimal 2 00:10:06.090 16:48:54 -- scripts/common.sh@352 -- # local d=2 00:10:06.090 16:48:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.090 16:48:54 -- scripts/common.sh@354 -- # echo 2 00:10:06.090 16:48:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:06.090 16:48:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:06.090 16:48:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:06.090 16:48:54 -- scripts/common.sh@367 -- # return 0 00:10:06.090 16:48:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.090 16:48:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:06.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.090 --rc genhtml_branch_coverage=1 00:10:06.090 --rc genhtml_function_coverage=1 00:10:06.090 --rc genhtml_legend=1 00:10:06.090 --rc geninfo_all_blocks=1 00:10:06.090 --rc geninfo_unexecuted_blocks=1 00:10:06.090 00:10:06.090 ' 00:10:06.090 16:48:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:06.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.090 --rc genhtml_branch_coverage=1 00:10:06.090 --rc genhtml_function_coverage=1 00:10:06.090 --rc genhtml_legend=1 00:10:06.090 --rc geninfo_all_blocks=1 00:10:06.090 --rc geninfo_unexecuted_blocks=1 00:10:06.090 00:10:06.090 ' 00:10:06.090 16:48:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:06.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.090 --rc genhtml_branch_coverage=1 00:10:06.090 --rc genhtml_function_coverage=1 00:10:06.090 --rc genhtml_legend=1 00:10:06.090 --rc geninfo_all_blocks=1 00:10:06.090 --rc geninfo_unexecuted_blocks=1 00:10:06.090 00:10:06.090 ' 00:10:06.090 16:48:54 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:06.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.090 --rc genhtml_branch_coverage=1 00:10:06.090 --rc genhtml_function_coverage=1 00:10:06.090 --rc genhtml_legend=1 00:10:06.090 --rc geninfo_all_blocks=1 00:10:06.090 --rc geninfo_unexecuted_blocks=1 00:10:06.090 00:10:06.090 ' 00:10:06.090 16:48:54 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:06.090 16:48:54 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:06.090 16:48:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:06.090 16:48:54 -- common/autotest_common.sh@10 -- # set +x 00:10:06.090 ************************************ 00:10:06.090 START TEST thread_poller_perf 00:10:06.090 ************************************ 00:10:06.090 16:48:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:06.349 [2024-11-05 16:48:54.994206] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:06.349 [2024-11-05 16:48:54.994433] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105919 ] 00:10:06.349 [2024-11-05 16:48:55.167102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.608 [2024-11-05 16:48:55.399227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.608 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:07.983 [2024-11-05T16:48:56.860Z] ====================================== 00:10:07.983 [2024-11-05T16:48:56.860Z] busy:2217382292 (cyc) 00:10:07.983 [2024-11-05T16:48:56.860Z] total_run_count: 362000 00:10:07.983 [2024-11-05T16:48:56.860Z] tsc_hz: 2200000000 (cyc) 00:10:07.983 [2024-11-05T16:48:56.860Z] ====================================== 00:10:07.983 [2024-11-05T16:48:56.860Z] poller_cost: 6125 (cyc), 2784 (nsec) 00:10:07.983 00:10:07.983 real 0m1.786s 00:10:07.983 user 0m1.570s 00:10:07.983 sys 0m0.112s 00:10:07.983 16:48:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:07.983 16:48:56 -- common/autotest_common.sh@10 -- # set +x 00:10:07.983 ************************************ 00:10:07.983 END TEST thread_poller_perf 00:10:07.983 ************************************ 00:10:07.983 16:48:56 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:07.983 16:48:56 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:07.983 16:48:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:07.983 16:48:56 -- common/autotest_common.sh@10 -- # set +x 00:10:07.983 ************************************ 00:10:07.983 START TEST thread_poller_perf 00:10:07.983 ************************************ 00:10:07.983 16:48:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:07.983 [2024-11-05 16:48:56.824156] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
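The poller_cost figure reported above is plain arithmetic over the printed counters: busy cycles divided by run count, then converted to wall time with the advertised TSC rate. Redone as a bash sketch with the numbers from the -l 1 run above (a reconstruction, not poller_perf's own code):

    busy=2217382292; runs=362000; tsc_hz=2200000000
    cyc=$(( busy / runs ))                  # 2217382292 / 362000 = 6125 cycles per poll
    nsec=$(( cyc * 1000000000 / tsc_hz ))   # 6125 cyc at 2.2 GHz = 2784 ns
    echo "poller_cost: $cyc (cyc), $nsec (nsec)"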
00:10:07.983 [2024-11-05 16:48:56.824400] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105964 ] 00:10:08.241 [2024-11-05 16:48:56.990212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.514 [2024-11-05 16:48:57.166096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.514 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:10:09.899 [2024-11-05T16:48:58.776Z] ====================================== 00:10:09.899 [2024-11-05T16:48:58.776Z] busy:2205105408 (cyc) 00:10:09.899 [2024-11-05T16:48:58.776Z] total_run_count: 4572000 00:10:09.899 [2024-11-05T16:48:58.776Z] tsc_hz: 2200000000 (cyc) 00:10:09.899 [2024-11-05T16:48:58.776Z] ====================================== 00:10:09.899 [2024-11-05T16:48:58.776Z] poller_cost: 482 (cyc), 219 (nsec) 00:10:09.899 00:10:09.899 real 0m1.716s 00:10:09.899 user 0m1.498s 00:10:09.899 sys 0m0.116s 00:10:09.899 16:48:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:09.899 16:48:58 -- common/autotest_common.sh@10 -- # set +x 00:10:09.899 ************************************ 00:10:09.899 END TEST thread_poller_perf 00:10:09.899 ************************************ 00:10:09.899 16:48:58 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:10:09.899 16:48:58 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:09.899 16:48:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:09.899 16:48:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:09.899 16:48:58 -- common/autotest_common.sh@10 -- # set +x 00:10:09.899 ************************************ 00:10:09.899 START TEST thread_spdk_lock 00:10:09.899 ************************************ 00:10:09.899 16:48:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:09.899 [2024-11-05 16:48:58.596781] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
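The same arithmetic applies to the zero-period run above: 2205105408 cyc / 4572000 runs ≈ 482 cyc, and 482 cyc at the 2.2 GHz TSC ≈ 219 ns, so an untimed poller costs roughly a twelfth of the 1 us timed poller from the previous run, presumably because the timed path also pays for timer-list bookkeeping. A one-line converter for either figure (an assumed helper, not something in the repo):

    cyc_to_nsec() { echo $(( $1 * 1000000000 / $2 )); }   # args: cycles tsc_hz
    cyc_to_nsec 482 2200000000    # -> 219, the -l 0 run above
    cyc_to_nsec 6125 2200000000   # -> 2784, the -l 1 run before it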
00:10:09.899 [2024-11-05 16:48:58.597004] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106014 ] 00:10:09.899 [2024-11-05 16:48:58.770737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:10.157 [2024-11-05 16:48:58.952784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.157 [2024-11-05 16:48:58.952787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.749 [2024-11-05 16:48:59.460107] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 957:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:10.749 [2024-11-05 16:48:59.460431] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:10:10.749 [2024-11-05 16:48:59.460616] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x55ae326b8ac0 00:10:10.749 [2024-11-05 16:48:59.468344] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:10.749 [2024-11-05 16:48:59.468587] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1018:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:10.749 [2024-11-05 16:48:59.468764] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:11.006 Starting test contend 00:10:11.006 Worker Delay Wait us Hold us Total us 00:10:11.006 0 3 121717 189765 311482 00:10:11.006 1 5 53502 290978 344481 00:10:11.006 PASS test contend 00:10:11.006 Starting test hold_by_poller 00:10:11.006 PASS test hold_by_poller 00:10:11.006 Starting test hold_by_message 00:10:11.006 PASS test hold_by_message 00:10:11.006 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:10:11.006 100014 assertions passed 00:10:11.006 0 assertions failed 00:10:11.006 00:10:11.006 real 0m1.254s 00:10:11.006 user 0m1.565s 00:10:11.006 sys 0m0.105s 00:10:11.006 16:48:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:11.006 16:48:59 -- common/autotest_common.sh@10 -- # set +x 00:10:11.006 ************************************ 00:10:11.006 END TEST thread_spdk_lock 00:10:11.006 ************************************ 00:10:11.006 00:10:11.006 real 0m5.056s 00:10:11.006 user 0m4.815s 00:10:11.006 sys 0m0.458s 00:10:11.006 16:48:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:11.006 16:48:59 -- common/autotest_common.sh@10 -- # set +x 00:10:11.006 ************************************ 00:10:11.006 END TEST thread 00:10:11.006 ************************************ 00:10:11.006 16:48:59 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:11.006 16:48:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:11.006 16:48:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:11.006 16:48:59 -- common/autotest_common.sh@10 -- # set +x 00:10:11.006 ************************************ 00:10:11.006 START TEST accel 00:10:11.006 
************************************ 00:10:11.006 16:48:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:11.263 * Looking for test storage... 00:10:11.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:11.263 16:48:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:11.263 16:48:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:11.263 16:48:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:11.263 16:49:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:11.263 16:49:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:11.263 16:49:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:11.263 16:49:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:11.263 16:49:00 -- scripts/common.sh@335 -- # IFS=.-: 00:10:11.263 16:49:00 -- scripts/common.sh@335 -- # read -ra ver1 00:10:11.263 16:49:00 -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.263 16:49:00 -- scripts/common.sh@336 -- # read -ra ver2 00:10:11.263 16:49:00 -- scripts/common.sh@337 -- # local 'op=<' 00:10:11.263 16:49:00 -- scripts/common.sh@339 -- # ver1_l=2 00:10:11.263 16:49:00 -- scripts/common.sh@340 -- # ver2_l=1 00:10:11.263 16:49:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:11.263 16:49:00 -- scripts/common.sh@343 -- # case "$op" in 00:10:11.263 16:49:00 -- scripts/common.sh@344 -- # : 1 00:10:11.263 16:49:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:11.263 16:49:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:11.263 16:49:00 -- scripts/common.sh@364 -- # decimal 1 00:10:11.263 16:49:00 -- scripts/common.sh@352 -- # local d=1 00:10:11.263 16:49:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.263 16:49:00 -- scripts/common.sh@354 -- # echo 1 00:10:11.263 16:49:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:11.263 16:49:00 -- scripts/common.sh@365 -- # decimal 2 00:10:11.263 16:49:00 -- scripts/common.sh@352 -- # local d=2 00:10:11.263 16:49:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.263 16:49:00 -- scripts/common.sh@354 -- # echo 2 00:10:11.263 16:49:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:11.263 16:49:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:11.263 16:49:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:11.263 16:49:00 -- scripts/common.sh@367 -- # return 0 00:10:11.263 16:49:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.263 16:49:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:11.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.263 --rc genhtml_branch_coverage=1 00:10:11.263 --rc genhtml_function_coverage=1 00:10:11.263 --rc genhtml_legend=1 00:10:11.263 --rc geninfo_all_blocks=1 00:10:11.263 --rc geninfo_unexecuted_blocks=1 00:10:11.263 00:10:11.263 ' 00:10:11.263 16:49:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:11.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.263 --rc genhtml_branch_coverage=1 00:10:11.263 --rc genhtml_function_coverage=1 00:10:11.263 --rc genhtml_legend=1 00:10:11.263 --rc geninfo_all_blocks=1 00:10:11.263 --rc geninfo_unexecuted_blocks=1 00:10:11.263 00:10:11.263 ' 00:10:11.263 16:49:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:11.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.263 --rc genhtml_branch_coverage=1 00:10:11.263 --rc 
genhtml_function_coverage=1 00:10:11.263 --rc genhtml_legend=1 00:10:11.263 --rc geninfo_all_blocks=1 00:10:11.263 --rc geninfo_unexecuted_blocks=1 00:10:11.263 00:10:11.263 ' 00:10:11.263 16:49:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:11.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.263 --rc genhtml_branch_coverage=1 00:10:11.263 --rc genhtml_function_coverage=1 00:10:11.263 --rc genhtml_legend=1 00:10:11.263 --rc geninfo_all_blocks=1 00:10:11.263 --rc geninfo_unexecuted_blocks=1 00:10:11.263 00:10:11.263 ' 00:10:11.263 16:49:00 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:10:11.263 16:49:00 -- accel/accel.sh@74 -- # get_expected_opcs 00:10:11.263 16:49:00 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:11.263 16:49:00 -- accel/accel.sh@59 -- # spdk_tgt_pid=106100 00:10:11.263 16:49:00 -- accel/accel.sh@60 -- # waitforlisten 106100 00:10:11.263 16:49:00 -- common/autotest_common.sh@829 -- # '[' -z 106100 ']' 00:10:11.263 16:49:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.263 16:49:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:11.263 16:49:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.263 16:49:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:11.263 16:49:00 -- common/autotest_common.sh@10 -- # set +x 00:10:11.263 16:49:00 -- accel/accel.sh@58 -- # build_accel_config 00:10:11.263 16:49:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:11.263 16:49:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:11.263 16:49:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:11.263 16:49:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:11.263 16:49:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:11.263 16:49:00 -- accel/accel.sh@41 -- # local IFS=, 00:10:11.263 16:49:00 -- accel/accel.sh@42 -- # jq -r . 00:10:11.263 16:49:00 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:11.521 [2024-11-05 16:49:00.165327] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:11.521 [2024-11-05 16:49:00.165569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106100 ] 00:10:11.521 [2024-11-05 16:49:00.336312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.778 [2024-11-05 16:49:00.515841] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:11.778 [2024-11-05 16:49:00.516456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.149 16:49:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.149 16:49:01 -- common/autotest_common.sh@862 -- # return 0 00:10:13.149 16:49:01 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:13.149 16:49:01 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:10:13.149 16:49:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.149 16:49:01 -- common/autotest_common.sh@10 -- # set +x 00:10:13.149 16:49:01 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:10:13.149 16:49:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.149 16:49:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # IFS== 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # read -r opc module 00:10:13.149 16:49:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:13.149 16:49:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # IFS== 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # read -r opc module 00:10:13.149 16:49:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:13.149 16:49:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # IFS== 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # read -r opc module 00:10:13.149 16:49:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:13.149 16:49:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # IFS== 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # read -r opc module 00:10:13.149 16:49:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:13.149 16:49:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # IFS== 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # read -r opc module 00:10:13.149 16:49:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:13.149 16:49:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # IFS== 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # read -r opc module 00:10:13.149 16:49:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:13.149 16:49:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # IFS== 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # read -r opc module 00:10:13.149 16:49:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:13.149 16:49:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # IFS== 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # read -r opc module 00:10:13.149 16:49:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:13.149 16:49:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # IFS== 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # read -r opc module 00:10:13.149 16:49:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:13.149 16:49:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # IFS== 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # read -r opc module 00:10:13.149 16:49:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:13.149 16:49:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # IFS== 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # read -r opc module 00:10:13.149 16:49:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:13.149 16:49:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # IFS== 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # read -r opc module 00:10:13.149 16:49:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:13.149 16:49:01 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # IFS== 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # read -r opc module 00:10:13.149 16:49:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:13.149 16:49:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # IFS== 00:10:13.149 16:49:01 -- accel/accel.sh@64 -- # read -r opc module 00:10:13.149 16:49:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:13.149 16:49:01 -- accel/accel.sh@67 -- # killprocess 106100 00:10:13.149 16:49:01 -- common/autotest_common.sh@936 -- # '[' -z 106100 ']' 00:10:13.149 16:49:01 -- common/autotest_common.sh@940 -- # kill -0 106100 00:10:13.149 16:49:01 -- common/autotest_common.sh@941 -- # uname 00:10:13.149 16:49:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:13.149 16:49:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 106100 00:10:13.149 16:49:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:13.149 16:49:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:13.149 16:49:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 106100' 00:10:13.149 killing process with pid 106100 00:10:13.149 16:49:01 -- common/autotest_common.sh@955 -- # kill 106100 00:10:13.149 16:49:01 -- common/autotest_common.sh@960 -- # wait 106100 00:10:15.049 16:49:03 -- accel/accel.sh@68 -- # trap - ERR 00:10:15.049 16:49:03 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:10:15.049 16:49:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:15.049 16:49:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:15.049 16:49:03 -- common/autotest_common.sh@10 -- # set +x 00:10:15.049 16:49:03 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:10:15.049 16:49:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:10:15.049 16:49:03 -- accel/accel.sh@12 -- # build_accel_config 00:10:15.049 16:49:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:15.049 16:49:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:15.049 16:49:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:15.049 16:49:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:15.050 16:49:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:15.050 16:49:03 -- accel/accel.sh@41 -- # local IFS=, 00:10:15.050 16:49:03 -- accel/accel.sh@42 -- # jq -r . 
00:10:15.050 16:49:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:15.050 16:49:03 -- common/autotest_common.sh@10 -- # set +x 00:10:15.050 16:49:03 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:10:15.050 16:49:03 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:15.050 16:49:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:15.050 16:49:03 -- common/autotest_common.sh@10 -- # set +x 00:10:15.050 ************************************ 00:10:15.050 START TEST accel_missing_filename 00:10:15.050 ************************************ 00:10:15.050 16:49:03 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:10:15.050 16:49:03 -- common/autotest_common.sh@650 -- # local es=0 00:10:15.050 16:49:03 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:10:15.050 16:49:03 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:10:15.050 16:49:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:15.050 16:49:03 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:10:15.050 16:49:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:15.050 16:49:03 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:10:15.050 16:49:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:10:15.050 16:49:03 -- accel/accel.sh@12 -- # build_accel_config 00:10:15.050 16:49:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:15.050 16:49:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:15.050 16:49:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:15.050 16:49:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:15.050 16:49:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:15.050 16:49:03 -- accel/accel.sh@41 -- # local IFS=, 00:10:15.050 16:49:03 -- accel/accel.sh@42 -- # jq -r . 00:10:15.050 [2024-11-05 16:49:03.876192] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:15.050 [2024-11-05 16:49:03.876395] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106196 ] 00:10:15.308 [2024-11-05 16:49:04.044331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.566 [2024-11-05 16:49:04.214656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.566 [2024-11-05 16:49:04.385301] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:16.140 [2024-11-05 16:49:04.798832] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:16.399 A filename is required. 
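The long IFS== / read -r block further up (just before pid 106100 was killed) is building the expected opcode-to-module map: accel_get_opc_assignments returns one opcode=module pair per entry, and with no hardware accel modules loaded in this run every opcode reads back as software. A condensed sketch of the same pattern, with a stand-in array instead of the live rpc_py call:

    declare -A expected_opcs
    # stand-in for: exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
    exp_opcs=("copy=software" "fill=software" "crc32c=software")
    for opc_opt in "${exp_opcs[@]}"; do
      IFS='=' read -r opc module <<< "$opc_opt"
      expected_opcs["$opc"]=$module    # every entry here resolves to software
    done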
00:10:16.399 16:49:05 -- common/autotest_common.sh@653 -- # es=234 00:10:16.399 16:49:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:16.399 16:49:05 -- common/autotest_common.sh@662 -- # es=106 00:10:16.399 16:49:05 -- common/autotest_common.sh@663 -- # case "$es" in 00:10:16.399 16:49:05 -- common/autotest_common.sh@670 -- # es=1 00:10:16.399 16:49:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:16.399 00:10:16.399 real 0m1.278s 00:10:16.399 user 0m1.035s 00:10:16.399 sys 0m0.194s 00:10:16.399 16:49:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:16.399 16:49:05 -- common/autotest_common.sh@10 -- # set +x 00:10:16.399 ************************************ 00:10:16.399 END TEST accel_missing_filename 00:10:16.399 ************************************ 00:10:16.399 16:49:05 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:16.399 16:49:05 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:10:16.399 16:49:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:16.399 16:49:05 -- common/autotest_common.sh@10 -- # set +x 00:10:16.399 ************************************ 00:10:16.399 START TEST accel_compress_verify 00:10:16.399 ************************************ 00:10:16.399 16:49:05 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:16.399 16:49:05 -- common/autotest_common.sh@650 -- # local es=0 00:10:16.399 16:49:05 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:16.399 16:49:05 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:10:16.399 16:49:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:16.399 16:49:05 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:10:16.399 16:49:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:16.399 16:49:05 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:16.399 16:49:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:16.399 16:49:05 -- accel/accel.sh@12 -- # build_accel_config 00:10:16.399 16:49:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:16.399 16:49:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:16.399 16:49:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:16.399 16:49:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:16.399 16:49:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:16.399 16:49:05 -- accel/accel.sh@41 -- # local IFS=, 00:10:16.399 16:49:05 -- accel/accel.sh@42 -- # jq -r . 00:10:16.399 [2024-11-05 16:49:05.202026] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
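The es= lines directly above are the tail of the NOT wrapper that ran accel_perf: raw status 234 is over 128, which by shell convention encodes killed-by-signal, so it is first reduced to 234 - 128 = 106; the case arm then collapses that known code to es=1, and (( !es == 0 )) succeeds only because the command really did fail. A stripped-down sketch of the whole inversion (the real helper in autotest_common.sh handles more codes than the one arm this trace hit):

    NOT() {
      local es=0
      "$@" || es=$?
      if (( es > 128 )); then
        es=$(( es - 128 ))    # 234 -> 106: strip the signal offset
      fi
      case "$es" in
        106) es=1 ;;          # arm inferred from this trace
      esac
      (( !es == 0 ))          # pass only if the wrapped command failed
    }
    NOT accel_perf -t 1 -w compress   # no -l input: "A filename is required." -> pass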
00:10:16.399 [2024-11-05 16:49:05.202249] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106235 ] 00:10:16.658 [2024-11-05 16:49:05.371055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.658 [2024-11-05 16:49:05.538302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.916 [2024-11-05 16:49:05.725587] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:17.482 [2024-11-05 16:49:06.138141] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:17.740 00:10:17.740 Compression does not support the verify option, aborting. 00:10:17.741 16:49:06 -- common/autotest_common.sh@653 -- # es=161 00:10:17.741 16:49:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:17.741 16:49:06 -- common/autotest_common.sh@662 -- # es=33 00:10:17.741 16:49:06 -- common/autotest_common.sh@663 -- # case "$es" in 00:10:17.741 16:49:06 -- common/autotest_common.sh@670 -- # es=1 00:10:17.741 16:49:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:17.741 00:10:17.741 real 0m1.304s 00:10:17.741 user 0m1.072s 00:10:17.741 sys 0m0.175s 00:10:17.741 ************************************ 00:10:17.741 END TEST accel_compress_verify 00:10:17.741 ************************************ 00:10:17.741 16:49:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:17.741 16:49:06 -- common/autotest_common.sh@10 -- # set +x 00:10:17.741 16:49:06 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:17.741 16:49:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:17.741 16:49:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:17.741 16:49:06 -- common/autotest_common.sh@10 -- # set +x 00:10:17.741 ************************************ 00:10:17.741 START TEST accel_wrong_workload 00:10:17.741 ************************************ 00:10:17.741 16:49:06 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:10:17.741 16:49:06 -- common/autotest_common.sh@650 -- # local es=0 00:10:17.741 16:49:06 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:17.741 16:49:06 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:10:17.741 16:49:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:17.741 16:49:06 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:10:17.741 16:49:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:17.741 16:49:06 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:10:17.741 16:49:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:17.741 16:49:06 -- accel/accel.sh@12 -- # build_accel_config 00:10:17.741 16:49:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:17.741 16:49:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:17.741 16:49:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:17.741 16:49:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:17.741 16:49:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:17.741 16:49:06 -- accel/accel.sh@41 -- # local IFS=, 00:10:17.741 16:49:06 -- accel/accel.sh@42 -- # jq -r . 
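The accel_json_cfg block that precedes every accel_perf invocation in this section is build_accel_config assembling an optional JSON config: each [[ ... -gt 0 ]] guard would append a module snippet, the array is joined with IFS=, validated through jq -r ., and handed to accel_perf as -c /dev/fd/62 via process substitution. A rough sketch of that shape, with the env-var and RPC method names assumed rather than copied from accel.sh (every guard is 0 in this run, so the config stays empty):

    build_accel_config() {
      accel_json_cfg=()
      # assumed guard/method, mirroring the [[ 0 -gt 0 ]] tests in the trace:
      [[ ${SPDK_TEST_ACCEL_DSA:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
      local IFS=,
      jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
    }
    accel_perf -c <(build_accel_config) -t 1 -w foobar   # how -c /dev/fd/62 appears in argv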
00:10:17.741 Unsupported workload type: foobar 00:10:17.741 [2024-11-05 16:49:06.558133] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:17.741 accel_perf options: 00:10:17.741 [-h help message] 00:10:17.741 [-q queue depth per core] 00:10:17.741 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:17.741 [-T number of threads per core 00:10:17.741 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:17.741 [-t time in seconds] 00:10:17.741 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:17.741 [ dif_verify, , dif_generate, dif_generate_copy 00:10:17.741 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:17.741 [-l for compress/decompress workloads, name of uncompressed input file 00:10:17.741 [-S for crc32c workload, use this seed value (default 0) 00:10:17.741 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:17.741 [-f for fill workload, use this BYTE value (default 255) 00:10:17.741 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:17.741 [-y verify result if this switch is on] 00:10:17.741 [-a tasks to allocate per core (default: same value as -q)] 00:10:17.741 Can be used to spread operations across a wider range of memory. 00:10:17.741 16:49:06 -- common/autotest_common.sh@653 -- # es=1 00:10:17.741 16:49:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:17.741 16:49:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:17.741 16:49:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:17.741 00:10:17.741 real 0m0.072s 00:10:17.741 user 0m0.091s 00:10:17.741 sys 0m0.035s 00:10:17.741 16:49:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:17.741 16:49:06 -- common/autotest_common.sh@10 -- # set +x 00:10:17.741 ************************************ 00:10:17.741 END TEST accel_wrong_workload 00:10:17.741 ************************************ 00:10:17.741 16:49:06 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:17.741 16:49:06 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:10:17.741 16:49:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:17.741 16:49:06 -- common/autotest_common.sh@10 -- # set +x 00:10:18.000 ************************************ 00:10:18.000 START TEST accel_negative_buffers 00:10:18.000 ************************************ 00:10:18.000 16:49:06 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:18.000 16:49:06 -- common/autotest_common.sh@650 -- # local es=0 00:10:18.000 16:49:06 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:18.000 16:49:06 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:10:18.000 16:49:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.000 16:49:06 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:10:18.000 16:49:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.000 16:49:06 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:10:18.000 16:49:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:18.000 16:49:06 -- accel/accel.sh@12 -- # 
build_accel_config 00:10:18.000 16:49:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:18.000 16:49:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:18.000 16:49:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:18.000 16:49:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:18.000 16:49:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:18.000 16:49:06 -- accel/accel.sh@41 -- # local IFS=, 00:10:18.000 16:49:06 -- accel/accel.sh@42 -- # jq -r . 00:10:18.000 -x option must be non-negative. 00:10:18.000 [2024-11-05 16:49:06.676937] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:18.000 accel_perf options: 00:10:18.000 [-h help message] 00:10:18.000 [-q queue depth per core] 00:10:18.000 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:18.000 [-T number of threads per core 00:10:18.000 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:18.000 [-t time in seconds] 00:10:18.000 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:18.000 [ dif_verify, , dif_generate, dif_generate_copy 00:10:18.000 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:18.000 [-l for compress/decompress workloads, name of uncompressed input file 00:10:18.000 [-S for crc32c workload, use this seed value (default 0) 00:10:18.000 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:18.000 [-f for fill workload, use this BYTE value (default 255) 00:10:18.000 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:18.000 [-y verify result if this switch is on] 00:10:18.000 [-a tasks to allocate per core (default: same value as -q)] 00:10:18.000 Can be used to spread operations across a wider range of memory. 
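Both negative probes above reduce to the same fail-fast path: spdk_app_parse_args rejects the argument before any I/O is set up (an unknown -w workload, or a negative -x where the option must be non-negative and xor needs at least 2 source buffers), prints the usage text seen twice here, and exits nonzero, which NOT converts into a pass:

    NOT accel_perf -t 1 -w foobar         # 'w' parse fails: workload not in the table
    NOT accel_perf -t 1 -w xor -y -x -1   # 'x' parse fails: count must be non-negative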
00:10:18.000 ************************************ 00:10:18.000 END TEST accel_negative_buffers 00:10:18.000 ************************************ 00:10:18.000 16:49:06 -- common/autotest_common.sh@653 -- # es=1 00:10:18.000 16:49:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:18.000 16:49:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:18.000 16:49:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:18.000 00:10:18.000 real 0m0.072s 00:10:18.000 user 0m0.076s 00:10:18.000 sys 0m0.037s 00:10:18.000 16:49:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:18.000 16:49:06 -- common/autotest_common.sh@10 -- # set +x 00:10:18.000 16:49:06 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:18.000 16:49:06 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:18.000 16:49:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:18.000 16:49:06 -- common/autotest_common.sh@10 -- # set +x 00:10:18.000 ************************************ 00:10:18.000 START TEST accel_crc32c 00:10:18.000 ************************************ 00:10:18.000 16:49:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:18.000 16:49:06 -- accel/accel.sh@16 -- # local accel_opc 00:10:18.000 16:49:06 -- accel/accel.sh@17 -- # local accel_module 00:10:18.000 16:49:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:18.000 16:49:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:18.000 16:49:06 -- accel/accel.sh@12 -- # build_accel_config 00:10:18.000 16:49:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:18.000 16:49:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:18.000 16:49:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:18.000 16:49:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:18.000 16:49:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:18.000 16:49:06 -- accel/accel.sh@41 -- # local IFS=, 00:10:18.000 16:49:06 -- accel/accel.sh@42 -- # jq -r . 00:10:18.000 [2024-11-05 16:49:06.793741] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:18.000 [2024-11-05 16:49:06.794118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106334 ] 00:10:18.259 [2024-11-05 16:49:06.963718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.259 [2024-11-05 16:49:07.133897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.789 16:49:09 -- accel/accel.sh@18 -- # out=' 00:10:20.789 SPDK Configuration: 00:10:20.789 Core mask: 0x1 00:10:20.789 00:10:20.789 Accel Perf Configuration: 00:10:20.789 Workload Type: crc32c 00:10:20.789 CRC-32C seed: 32 00:10:20.789 Transfer size: 4096 bytes 00:10:20.789 Vector count 1 00:10:20.789 Module: software 00:10:20.789 Queue depth: 32 00:10:20.789 Allocate depth: 32 00:10:20.789 # threads/core: 1 00:10:20.789 Run time: 1 seconds 00:10:20.789 Verify: Yes 00:10:20.789 00:10:20.789 Running for 1 seconds... 
00:10:20.789 00:10:20.789 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:20.789 ------------------------------------------------------------------------------------ 00:10:20.789 0,0 475104/s 1855 MiB/s 0 0 00:10:20.789 ==================================================================================== 00:10:20.789 Total 475104/s 1855 MiB/s 0 0' 00:10:20.789 16:49:09 -- accel/accel.sh@20 -- # IFS=: 00:10:20.789 16:49:09 -- accel/accel.sh@20 -- # read -r var val 00:10:20.789 16:49:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:20.789 16:49:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:20.789 16:49:09 -- accel/accel.sh@12 -- # build_accel_config 00:10:20.789 16:49:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:20.789 16:49:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:20.789 16:49:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:20.789 16:49:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:20.789 16:49:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:20.789 16:49:09 -- accel/accel.sh@41 -- # local IFS=, 00:10:20.789 16:49:09 -- accel/accel.sh@42 -- # jq -r . 00:10:20.789 [2024-11-05 16:49:09.112148] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:20.790 [2024-11-05 16:49:09.112534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106371 ] 00:10:20.790 [2024-11-05 16:49:09.282414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.790 [2024-11-05 16:49:09.474040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.790 16:49:09 -- accel/accel.sh@21 -- # val= 00:10:20.790 16:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # IFS=: 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # read -r var val 00:10:20.790 16:49:09 -- accel/accel.sh@21 -- # val= 00:10:20.790 16:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # IFS=: 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # read -r var val 00:10:20.790 16:49:09 -- accel/accel.sh@21 -- # val=0x1 00:10:20.790 16:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # IFS=: 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # read -r var val 00:10:20.790 16:49:09 -- accel/accel.sh@21 -- # val= 00:10:20.790 16:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # IFS=: 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # read -r var val 00:10:20.790 16:49:09 -- accel/accel.sh@21 -- # val= 00:10:20.790 16:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # IFS=: 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # read -r var val 00:10:20.790 16:49:09 -- accel/accel.sh@21 -- # val=crc32c 00:10:20.790 16:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.790 16:49:09 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # IFS=: 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # read -r var val 00:10:20.790 16:49:09 -- accel/accel.sh@21 -- # val=32 00:10:20.790 16:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # IFS=: 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # read -r var val 00:10:20.790 16:49:09 
-- accel/accel.sh@21 -- # val='4096 bytes' 00:10:20.790 16:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # IFS=: 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # read -r var val 00:10:20.790 16:49:09 -- accel/accel.sh@21 -- # val= 00:10:20.790 16:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # IFS=: 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # read -r var val 00:10:20.790 16:49:09 -- accel/accel.sh@21 -- # val=software 00:10:20.790 16:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.790 16:49:09 -- accel/accel.sh@23 -- # accel_module=software 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # IFS=: 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # read -r var val 00:10:20.790 16:49:09 -- accel/accel.sh@21 -- # val=32 00:10:20.790 16:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # IFS=: 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # read -r var val 00:10:20.790 16:49:09 -- accel/accel.sh@21 -- # val=32 00:10:20.790 16:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # IFS=: 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # read -r var val 00:10:20.790 16:49:09 -- accel/accel.sh@21 -- # val=1 00:10:20.790 16:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # IFS=: 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # read -r var val 00:10:20.790 16:49:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:20.790 16:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # IFS=: 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # read -r var val 00:10:20.790 16:49:09 -- accel/accel.sh@21 -- # val=Yes 00:10:20.790 16:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # IFS=: 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # read -r var val 00:10:20.790 16:49:09 -- accel/accel.sh@21 -- # val= 00:10:20.790 16:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # IFS=: 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # read -r var val 00:10:20.790 16:49:09 -- accel/accel.sh@21 -- # val= 00:10:20.790 16:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # IFS=: 00:10:20.790 16:49:09 -- accel/accel.sh@20 -- # read -r var val 00:10:22.692 16:49:11 -- accel/accel.sh@21 -- # val= 00:10:22.692 16:49:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.692 16:49:11 -- accel/accel.sh@20 -- # IFS=: 00:10:22.692 16:49:11 -- accel/accel.sh@20 -- # read -r var val 00:10:22.692 16:49:11 -- accel/accel.sh@21 -- # val= 00:10:22.692 16:49:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.692 16:49:11 -- accel/accel.sh@20 -- # IFS=: 00:10:22.692 16:49:11 -- accel/accel.sh@20 -- # read -r var val 00:10:22.692 16:49:11 -- accel/accel.sh@21 -- # val= 00:10:22.692 16:49:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.692 16:49:11 -- accel/accel.sh@20 -- # IFS=: 00:10:22.692 16:49:11 -- accel/accel.sh@20 -- # read -r var val 00:10:22.692 16:49:11 -- accel/accel.sh@21 -- # val= 00:10:22.692 16:49:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.692 16:49:11 -- accel/accel.sh@20 -- # IFS=: 00:10:22.692 16:49:11 -- accel/accel.sh@20 -- # read -r var val 00:10:22.692 16:49:11 -- accel/accel.sh@21 -- # val= 00:10:22.692 16:49:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.692 16:49:11 -- accel/accel.sh@20 -- # IFS=: 00:10:22.692 16:49:11 
-- accel/accel.sh@20 -- # read -r var val 00:10:22.692 16:49:11 -- accel/accel.sh@21 -- # val= 00:10:22.692 16:49:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.692 16:49:11 -- accel/accel.sh@20 -- # IFS=: 00:10:22.692 16:49:11 -- accel/accel.sh@20 -- # read -r var val 00:10:22.692 ************************************ 00:10:22.692 END TEST accel_crc32c 00:10:22.692 ************************************ 00:10:22.692 16:49:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:22.692 16:49:11 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:22.692 16:49:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:22.692 00:10:22.692 real 0m4.686s 00:10:22.692 user 0m4.110s 00:10:22.692 sys 0m0.393s 00:10:22.692 16:49:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:22.692 16:49:11 -- common/autotest_common.sh@10 -- # set +x 00:10:22.692 16:49:11 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:22.692 16:49:11 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:22.692 16:49:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:22.692 16:49:11 -- common/autotest_common.sh@10 -- # set +x 00:10:22.692 ************************************ 00:10:22.692 START TEST accel_crc32c_C2 00:10:22.692 ************************************ 00:10:22.692 16:49:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:22.692 16:49:11 -- accel/accel.sh@16 -- # local accel_opc 00:10:22.692 16:49:11 -- accel/accel.sh@17 -- # local accel_module 00:10:22.692 16:49:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:22.692 16:49:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:22.692 16:49:11 -- accel/accel.sh@12 -- # build_accel_config 00:10:22.692 16:49:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:22.692 16:49:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:22.692 16:49:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:22.692 16:49:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:22.692 16:49:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:22.692 16:49:11 -- accel/accel.sh@41 -- # local IFS=, 00:10:22.692 16:49:11 -- accel/accel.sh@42 -- # jq -r . 00:10:22.692 [2024-11-05 16:49:11.534335] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:22.692 [2024-11-05 16:49:11.534727] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106416 ] 00:10:22.951 [2024-11-05 16:49:11.703728] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.210 [2024-11-05 16:49:11.878922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.113 16:49:13 -- accel/accel.sh@18 -- # out=' 00:10:25.113 SPDK Configuration: 00:10:25.113 Core mask: 0x1 00:10:25.113 00:10:25.113 Accel Perf Configuration: 00:10:25.113 Workload Type: crc32c 00:10:25.113 CRC-32C seed: 0 00:10:25.113 Transfer size: 4096 bytes 00:10:25.113 Vector count 2 00:10:25.113 Module: software 00:10:25.113 Queue depth: 32 00:10:25.113 Allocate depth: 32 00:10:25.113 # threads/core: 1 00:10:25.113 Run time: 1 seconds 00:10:25.113 Verify: Yes 00:10:25.113 00:10:25.113 Running for 1 seconds... 
00:10:25.113 00:10:25.113 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:25.113 ------------------------------------------------------------------------------------ 00:10:25.113 0,0 372896/s 2913 MiB/s 0 0 00:10:25.113 ==================================================================================== 00:10:25.113 Total 372896/s 1456 MiB/s 0 0' 00:10:25.113 16:49:13 -- accel/accel.sh@20 -- # IFS=: 00:10:25.113 16:49:13 -- accel/accel.sh@20 -- # read -r var val 00:10:25.113 16:49:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:25.113 16:49:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:25.113 16:49:13 -- accel/accel.sh@12 -- # build_accel_config 00:10:25.113 16:49:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:25.113 16:49:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.113 16:49:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.113 16:49:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:25.113 16:49:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:25.113 16:49:13 -- accel/accel.sh@41 -- # local IFS=, 00:10:25.113 16:49:13 -- accel/accel.sh@42 -- # jq -r . 00:10:25.113 [2024-11-05 16:49:13.851560] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:25.113 [2024-11-05 16:49:13.851896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106453 ] 00:10:25.372 [2024-11-05 16:49:14.005407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.372 [2024-11-05 16:49:14.178109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.630 16:49:14 -- accel/accel.sh@21 -- # val= 00:10:25.630 16:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.630 16:49:14 -- accel/accel.sh@20 -- # IFS=: 00:10:25.630 16:49:14 -- accel/accel.sh@20 -- # read -r var val 00:10:25.630 16:49:14 -- accel/accel.sh@21 -- # val= 00:10:25.630 16:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.630 16:49:14 -- accel/accel.sh@20 -- # IFS=: 00:10:25.630 16:49:14 -- accel/accel.sh@20 -- # read -r var val 00:10:25.630 16:49:14 -- accel/accel.sh@21 -- # val=0x1 00:10:25.630 16:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.630 16:49:14 -- accel/accel.sh@20 -- # IFS=: 00:10:25.630 16:49:14 -- accel/accel.sh@20 -- # read -r var val 00:10:25.630 16:49:14 -- accel/accel.sh@21 -- # val= 00:10:25.630 16:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.630 16:49:14 -- accel/accel.sh@20 -- # IFS=: 00:10:25.630 16:49:14 -- accel/accel.sh@20 -- # read -r var val 00:10:25.630 16:49:14 -- accel/accel.sh@21 -- # val= 00:10:25.630 16:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.630 16:49:14 -- accel/accel.sh@20 -- # IFS=: 00:10:25.630 16:49:14 -- accel/accel.sh@20 -- # read -r var val 00:10:25.630 16:49:14 -- accel/accel.sh@21 -- # val=crc32c 00:10:25.630 16:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.630 16:49:14 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:25.630 16:49:14 -- accel/accel.sh@20 -- # IFS=: 00:10:25.630 16:49:14 -- accel/accel.sh@20 -- # read -r var val 00:10:25.630 16:49:14 -- accel/accel.sh@21 -- # val=0 00:10:25.630 16:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.630 16:49:14 -- accel/accel.sh@20 -- # IFS=: 00:10:25.630 16:49:14 -- accel/accel.sh@20 -- # read -r var val 00:10:25.631 16:49:14 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:10:25.631 16:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.631 16:49:14 -- accel/accel.sh@20 -- # IFS=: 00:10:25.631 16:49:14 -- accel/accel.sh@20 -- # read -r var val 00:10:25.631 16:49:14 -- accel/accel.sh@21 -- # val= 00:10:25.631 16:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.631 16:49:14 -- accel/accel.sh@20 -- # IFS=: 00:10:25.631 16:49:14 -- accel/accel.sh@20 -- # read -r var val 00:10:25.631 16:49:14 -- accel/accel.sh@21 -- # val=software 00:10:25.631 16:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.631 16:49:14 -- accel/accel.sh@23 -- # accel_module=software 00:10:25.631 16:49:14 -- accel/accel.sh@20 -- # IFS=: 00:10:25.631 16:49:14 -- accel/accel.sh@20 -- # read -r var val 00:10:25.631 16:49:14 -- accel/accel.sh@21 -- # val=32 00:10:25.631 16:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.631 16:49:14 -- accel/accel.sh@20 -- # IFS=: 00:10:25.631 16:49:14 -- accel/accel.sh@20 -- # read -r var val 00:10:25.631 16:49:14 -- accel/accel.sh@21 -- # val=32 00:10:25.631 16:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.631 16:49:14 -- accel/accel.sh@20 -- # IFS=: 00:10:25.631 16:49:14 -- accel/accel.sh@20 -- # read -r var val 00:10:25.631 16:49:14 -- accel/accel.sh@21 -- # val=1 00:10:25.631 16:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.631 16:49:14 -- accel/accel.sh@20 -- # IFS=: 00:10:25.631 16:49:14 -- accel/accel.sh@20 -- # read -r var val 00:10:25.631 16:49:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:25.631 16:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.631 16:49:14 -- accel/accel.sh@20 -- # IFS=: 00:10:25.631 16:49:14 -- accel/accel.sh@20 -- # read -r var val 00:10:25.631 16:49:14 -- accel/accel.sh@21 -- # val=Yes 00:10:25.631 16:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.631 16:49:14 -- accel/accel.sh@20 -- # IFS=: 00:10:25.631 16:49:14 -- accel/accel.sh@20 -- # read -r var val 00:10:25.631 16:49:14 -- accel/accel.sh@21 -- # val= 00:10:25.631 16:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.631 16:49:14 -- accel/accel.sh@20 -- # IFS=: 00:10:25.631 16:49:14 -- accel/accel.sh@20 -- # read -r var val 00:10:25.631 16:49:14 -- accel/accel.sh@21 -- # val= 00:10:25.631 16:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.631 16:49:14 -- accel/accel.sh@20 -- # IFS=: 00:10:25.631 16:49:14 -- accel/accel.sh@20 -- # read -r var val 00:10:27.532 16:49:16 -- accel/accel.sh@21 -- # val= 00:10:27.532 16:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.532 16:49:16 -- accel/accel.sh@20 -- # IFS=: 00:10:27.532 16:49:16 -- accel/accel.sh@20 -- # read -r var val 00:10:27.532 16:49:16 -- accel/accel.sh@21 -- # val= 00:10:27.532 16:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.532 16:49:16 -- accel/accel.sh@20 -- # IFS=: 00:10:27.532 16:49:16 -- accel/accel.sh@20 -- # read -r var val 00:10:27.532 16:49:16 -- accel/accel.sh@21 -- # val= 00:10:27.532 16:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.532 16:49:16 -- accel/accel.sh@20 -- # IFS=: 00:10:27.532 16:49:16 -- accel/accel.sh@20 -- # read -r var val 00:10:27.532 16:49:16 -- accel/accel.sh@21 -- # val= 00:10:27.532 16:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.532 16:49:16 -- accel/accel.sh@20 -- # IFS=: 00:10:27.532 16:49:16 -- accel/accel.sh@20 -- # read -r var val 00:10:27.532 16:49:16 -- accel/accel.sh@21 -- # val= 00:10:27.532 16:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.532 16:49:16 -- accel/accel.sh@20 -- # IFS=: 00:10:27.532 16:49:16 -- 
accel/accel.sh@20 -- # read -r var val 00:10:27.532 16:49:16 -- accel/accel.sh@21 -- # val= 00:10:27.532 16:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.532 16:49:16 -- accel/accel.sh@20 -- # IFS=: 00:10:27.532 16:49:16 -- accel/accel.sh@20 -- # read -r var val 00:10:27.532 ************************************ 00:10:27.532 END TEST accel_crc32c_C2 00:10:27.532 ************************************ 00:10:27.532 16:49:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:27.532 16:49:16 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:27.532 16:49:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:27.532 00:10:27.532 real 0m4.646s 00:10:27.532 user 0m4.111s 00:10:27.532 sys 0m0.358s 00:10:27.532 16:49:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:27.532 16:49:16 -- common/autotest_common.sh@10 -- # set +x 00:10:27.532 16:49:16 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:27.532 16:49:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:27.532 16:49:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:27.532 16:49:16 -- common/autotest_common.sh@10 -- # set +x 00:10:27.532 ************************************ 00:10:27.532 START TEST accel_copy 00:10:27.532 ************************************ 00:10:27.532 16:49:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:10:27.532 16:49:16 -- accel/accel.sh@16 -- # local accel_opc 00:10:27.532 16:49:16 -- accel/accel.sh@17 -- # local accel_module 00:10:27.532 16:49:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:10:27.532 16:49:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:27.532 16:49:16 -- accel/accel.sh@12 -- # build_accel_config 00:10:27.532 16:49:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:27.532 16:49:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:27.532 16:49:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:27.532 16:49:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:27.532 16:49:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:27.532 16:49:16 -- accel/accel.sh@41 -- # local IFS=, 00:10:27.532 16:49:16 -- accel/accel.sh@42 -- # jq -r . 00:10:27.532 [2024-11-05 16:49:16.232797] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:27.532 [2024-11-05 16:49:16.233137] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106503 ] 00:10:27.532 [2024-11-05 16:49:16.403103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.791 [2024-11-05 16:49:16.587375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.700 16:49:18 -- accel/accel.sh@18 -- # out=' 00:10:29.700 SPDK Configuration: 00:10:29.700 Core mask: 0x1 00:10:29.700 00:10:29.700 Accel Perf Configuration: 00:10:29.700 Workload Type: copy 00:10:29.700 Transfer size: 4096 bytes 00:10:29.700 Vector count 1 00:10:29.700 Module: software 00:10:29.700 Queue depth: 32 00:10:29.700 Allocate depth: 32 00:10:29.700 # threads/core: 1 00:10:29.700 Run time: 1 seconds 00:10:29.700 Verify: Yes 00:10:29.700 00:10:29.700 Running for 1 seconds... 
00:10:29.700 00:10:29.700 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:29.700 ------------------------------------------------------------------------------------ 00:10:29.700 0,0 289600/s 1131 MiB/s 0 0 00:10:29.700 ==================================================================================== 00:10:29.700 Total 289600/s 1131 MiB/s 0 0' 00:10:29.700 16:49:18 -- accel/accel.sh@20 -- # IFS=: 00:10:29.700 16:49:18 -- accel/accel.sh@20 -- # read -r var val 00:10:29.700 16:49:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:29.700 16:49:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:29.700 16:49:18 -- accel/accel.sh@12 -- # build_accel_config 00:10:29.700 16:49:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:29.700 16:49:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.700 16:49:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.700 16:49:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:29.700 16:49:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:29.700 16:49:18 -- accel/accel.sh@41 -- # local IFS=, 00:10:29.700 16:49:18 -- accel/accel.sh@42 -- # jq -r . 00:10:29.700 [2024-11-05 16:49:18.573363] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:29.700 [2024-11-05 16:49:18.573759] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106540 ] 00:10:29.959 [2024-11-05 16:49:18.746352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.218 [2024-11-05 16:49:18.946074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.476 16:49:19 -- accel/accel.sh@21 -- # val= 00:10:30.476 16:49:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.476 16:49:19 -- accel/accel.sh@20 -- # IFS=: 00:10:30.476 16:49:19 -- accel/accel.sh@20 -- # read -r var val 00:10:30.476 16:49:19 -- accel/accel.sh@21 -- # val= 00:10:30.477 16:49:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # IFS=: 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # read -r var val 00:10:30.477 16:49:19 -- accel/accel.sh@21 -- # val=0x1 00:10:30.477 16:49:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # IFS=: 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # read -r var val 00:10:30.477 16:49:19 -- accel/accel.sh@21 -- # val= 00:10:30.477 16:49:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # IFS=: 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # read -r var val 00:10:30.477 16:49:19 -- accel/accel.sh@21 -- # val= 00:10:30.477 16:49:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # IFS=: 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # read -r var val 00:10:30.477 16:49:19 -- accel/accel.sh@21 -- # val=copy 00:10:30.477 16:49:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.477 16:49:19 -- accel/accel.sh@24 -- # accel_opc=copy 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # IFS=: 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # read -r var val 00:10:30.477 16:49:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:30.477 16:49:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # IFS=: 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # read -r var val 00:10:30.477 16:49:19 -- 
accel/accel.sh@21 -- # val= 00:10:30.477 16:49:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # IFS=: 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # read -r var val 00:10:30.477 16:49:19 -- accel/accel.sh@21 -- # val=software 00:10:30.477 16:49:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.477 16:49:19 -- accel/accel.sh@23 -- # accel_module=software 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # IFS=: 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # read -r var val 00:10:30.477 16:49:19 -- accel/accel.sh@21 -- # val=32 00:10:30.477 16:49:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # IFS=: 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # read -r var val 00:10:30.477 16:49:19 -- accel/accel.sh@21 -- # val=32 00:10:30.477 16:49:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # IFS=: 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # read -r var val 00:10:30.477 16:49:19 -- accel/accel.sh@21 -- # val=1 00:10:30.477 16:49:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # IFS=: 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # read -r var val 00:10:30.477 16:49:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:30.477 16:49:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # IFS=: 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # read -r var val 00:10:30.477 16:49:19 -- accel/accel.sh@21 -- # val=Yes 00:10:30.477 16:49:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # IFS=: 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # read -r var val 00:10:30.477 16:49:19 -- accel/accel.sh@21 -- # val= 00:10:30.477 16:49:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # IFS=: 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # read -r var val 00:10:30.477 16:49:19 -- accel/accel.sh@21 -- # val= 00:10:30.477 16:49:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # IFS=: 00:10:30.477 16:49:19 -- accel/accel.sh@20 -- # read -r var val 00:10:32.381 16:49:20 -- accel/accel.sh@21 -- # val= 00:10:32.381 16:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.381 16:49:20 -- accel/accel.sh@20 -- # IFS=: 00:10:32.381 16:49:20 -- accel/accel.sh@20 -- # read -r var val 00:10:32.381 16:49:20 -- accel/accel.sh@21 -- # val= 00:10:32.381 16:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.381 16:49:20 -- accel/accel.sh@20 -- # IFS=: 00:10:32.381 16:49:20 -- accel/accel.sh@20 -- # read -r var val 00:10:32.381 16:49:20 -- accel/accel.sh@21 -- # val= 00:10:32.381 16:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.381 16:49:20 -- accel/accel.sh@20 -- # IFS=: 00:10:32.381 16:49:20 -- accel/accel.sh@20 -- # read -r var val 00:10:32.381 16:49:20 -- accel/accel.sh@21 -- # val= 00:10:32.381 16:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.381 16:49:20 -- accel/accel.sh@20 -- # IFS=: 00:10:32.381 16:49:20 -- accel/accel.sh@20 -- # read -r var val 00:10:32.381 16:49:20 -- accel/accel.sh@21 -- # val= 00:10:32.381 16:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.381 16:49:20 -- accel/accel.sh@20 -- # IFS=: 00:10:32.381 16:49:20 -- accel/accel.sh@20 -- # read -r var val 00:10:32.381 16:49:20 -- accel/accel.sh@21 -- # val= 00:10:32.381 16:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.381 16:49:20 -- accel/accel.sh@20 -- # IFS=: 00:10:32.381 16:49:20 -- 
accel/accel.sh@20 -- # read -r var val 00:10:32.381 ************************************ 00:10:32.381 END TEST accel_copy 00:10:32.381 ************************************ 00:10:32.381 16:49:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:32.381 16:49:20 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:10:32.381 16:49:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:32.381 00:10:32.381 real 0m4.715s 00:10:32.381 user 0m4.175s 00:10:32.381 sys 0m0.362s 00:10:32.381 16:49:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:32.381 16:49:20 -- common/autotest_common.sh@10 -- # set +x 00:10:32.381 16:49:20 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:32.381 16:49:20 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:32.381 16:49:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.381 16:49:20 -- common/autotest_common.sh@10 -- # set +x 00:10:32.381 ************************************ 00:10:32.381 START TEST accel_fill 00:10:32.381 ************************************ 00:10:32.381 16:49:20 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:32.381 16:49:20 -- accel/accel.sh@16 -- # local accel_opc 00:10:32.381 16:49:20 -- accel/accel.sh@17 -- # local accel_module 00:10:32.381 16:49:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:32.381 16:49:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:32.381 16:49:20 -- accel/accel.sh@12 -- # build_accel_config 00:10:32.381 16:49:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:32.381 16:49:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.381 16:49:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.381 16:49:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:32.381 16:49:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:32.381 16:49:20 -- accel/accel.sh@41 -- # local IFS=, 00:10:32.381 16:49:20 -- accel/accel.sh@42 -- # jq -r . 00:10:32.381 [2024-11-05 16:49:21.000775] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:32.381 [2024-11-05 16:49:21.001151] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106595 ] 00:10:32.381 [2024-11-05 16:49:21.171576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.640 [2024-11-05 16:49:21.340726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.543 16:49:23 -- accel/accel.sh@18 -- # out=' 00:10:34.543 SPDK Configuration: 00:10:34.543 Core mask: 0x1 00:10:34.543 00:10:34.543 Accel Perf Configuration: 00:10:34.543 Workload Type: fill 00:10:34.543 Fill pattern: 0x80 00:10:34.543 Transfer size: 4096 bytes 00:10:34.543 Vector count 1 00:10:34.543 Module: software 00:10:34.543 Queue depth: 64 00:10:34.543 Allocate depth: 64 00:10:34.543 # threads/core: 1 00:10:34.543 Run time: 1 seconds 00:10:34.543 Verify: Yes 00:10:34.543 00:10:34.543 Running for 1 seconds... 
00:10:34.543 00:10:34.543 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:34.543 ------------------------------------------------------------------------------------ 00:10:34.543 0,0 433536/s 1693 MiB/s 0 0 00:10:34.543 ==================================================================================== 00:10:34.543 Total 433536/s 1693 MiB/s 0 0' 00:10:34.543 16:49:23 -- accel/accel.sh@20 -- # IFS=: 00:10:34.543 16:49:23 -- accel/accel.sh@20 -- # read -r var val 00:10:34.543 16:49:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:34.543 16:49:23 -- accel/accel.sh@12 -- # build_accel_config 00:10:34.543 16:49:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:34.543 16:49:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:34.543 16:49:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:34.543 16:49:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:34.543 16:49:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:34.543 16:49:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:34.543 16:49:23 -- accel/accel.sh@41 -- # local IFS=, 00:10:34.543 16:49:23 -- accel/accel.sh@42 -- # jq -r . 00:10:34.543 [2024-11-05 16:49:23.323593] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:34.543 [2024-11-05 16:49:23.324501] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106632 ] 00:10:34.801 [2024-11-05 16:49:23.491246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.801 [2024-11-05 16:49:23.680903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.059 16:49:23 -- accel/accel.sh@21 -- # val= 00:10:35.059 16:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # IFS=: 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # read -r var val 00:10:35.059 16:49:23 -- accel/accel.sh@21 -- # val= 00:10:35.059 16:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # IFS=: 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # read -r var val 00:10:35.059 16:49:23 -- accel/accel.sh@21 -- # val=0x1 00:10:35.059 16:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # IFS=: 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # read -r var val 00:10:35.059 16:49:23 -- accel/accel.sh@21 -- # val= 00:10:35.059 16:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # IFS=: 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # read -r var val 00:10:35.059 16:49:23 -- accel/accel.sh@21 -- # val= 00:10:35.059 16:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # IFS=: 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # read -r var val 00:10:35.059 16:49:23 -- accel/accel.sh@21 -- # val=fill 00:10:35.059 16:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.059 16:49:23 -- accel/accel.sh@24 -- # accel_opc=fill 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # IFS=: 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # read -r var val 00:10:35.059 16:49:23 -- accel/accel.sh@21 -- # val=0x80 00:10:35.059 16:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # IFS=: 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # read -r var val 
00:10:35.059 16:49:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:35.059 16:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # IFS=: 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # read -r var val 00:10:35.059 16:49:23 -- accel/accel.sh@21 -- # val= 00:10:35.059 16:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # IFS=: 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # read -r var val 00:10:35.059 16:49:23 -- accel/accel.sh@21 -- # val=software 00:10:35.059 16:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.059 16:49:23 -- accel/accel.sh@23 -- # accel_module=software 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # IFS=: 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # read -r var val 00:10:35.059 16:49:23 -- accel/accel.sh@21 -- # val=64 00:10:35.059 16:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # IFS=: 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # read -r var val 00:10:35.059 16:49:23 -- accel/accel.sh@21 -- # val=64 00:10:35.059 16:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # IFS=: 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # read -r var val 00:10:35.059 16:49:23 -- accel/accel.sh@21 -- # val=1 00:10:35.059 16:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # IFS=: 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # read -r var val 00:10:35.059 16:49:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:35.059 16:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # IFS=: 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # read -r var val 00:10:35.059 16:49:23 -- accel/accel.sh@21 -- # val=Yes 00:10:35.059 16:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # IFS=: 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # read -r var val 00:10:35.059 16:49:23 -- accel/accel.sh@21 -- # val= 00:10:35.059 16:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # IFS=: 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # read -r var val 00:10:35.059 16:49:23 -- accel/accel.sh@21 -- # val= 00:10:35.059 16:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # IFS=: 00:10:35.059 16:49:23 -- accel/accel.sh@20 -- # read -r var val 00:10:36.962 16:49:25 -- accel/accel.sh@21 -- # val= 00:10:36.962 16:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.962 16:49:25 -- accel/accel.sh@20 -- # IFS=: 00:10:36.962 16:49:25 -- accel/accel.sh@20 -- # read -r var val 00:10:36.962 16:49:25 -- accel/accel.sh@21 -- # val= 00:10:36.962 16:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.962 16:49:25 -- accel/accel.sh@20 -- # IFS=: 00:10:36.962 16:49:25 -- accel/accel.sh@20 -- # read -r var val 00:10:36.962 16:49:25 -- accel/accel.sh@21 -- # val= 00:10:36.962 16:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.962 16:49:25 -- accel/accel.sh@20 -- # IFS=: 00:10:36.962 16:49:25 -- accel/accel.sh@20 -- # read -r var val 00:10:36.962 16:49:25 -- accel/accel.sh@21 -- # val= 00:10:36.962 16:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.962 16:49:25 -- accel/accel.sh@20 -- # IFS=: 00:10:36.962 16:49:25 -- accel/accel.sh@20 -- # read -r var val 00:10:36.962 16:49:25 -- accel/accel.sh@21 -- # val= 00:10:36.962 16:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.962 16:49:25 -- accel/accel.sh@20 -- # IFS=: 
00:10:36.962 16:49:25 -- accel/accel.sh@20 -- # read -r var val 00:10:36.962 16:49:25 -- accel/accel.sh@21 -- # val= 00:10:36.962 16:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.962 16:49:25 -- accel/accel.sh@20 -- # IFS=: 00:10:36.962 16:49:25 -- accel/accel.sh@20 -- # read -r var val 00:10:36.962 ************************************ 00:10:36.962 END TEST accel_fill 00:10:36.962 ************************************ 00:10:36.962 16:49:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:36.962 16:49:25 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:10:36.962 16:49:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:36.962 00:10:36.962 real 0m4.681s 00:10:36.962 user 0m4.093s 00:10:36.962 sys 0m0.406s 00:10:36.962 16:49:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:36.962 16:49:25 -- common/autotest_common.sh@10 -- # set +x 00:10:36.962 16:49:25 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:36.962 16:49:25 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:36.962 16:49:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:36.962 16:49:25 -- common/autotest_common.sh@10 -- # set +x 00:10:36.962 ************************************ 00:10:36.962 START TEST accel_copy_crc32c 00:10:36.962 ************************************ 00:10:36.962 16:49:25 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:10:36.962 16:49:25 -- accel/accel.sh@16 -- # local accel_opc 00:10:36.962 16:49:25 -- accel/accel.sh@17 -- # local accel_module 00:10:36.962 16:49:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:36.962 16:49:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:36.962 16:49:25 -- accel/accel.sh@12 -- # build_accel_config 00:10:36.962 16:49:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:36.962 16:49:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:36.962 16:49:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:36.962 16:49:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:36.962 16:49:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:36.962 16:49:25 -- accel/accel.sh@41 -- # local IFS=, 00:10:36.962 16:49:25 -- accel/accel.sh@42 -- # jq -r . 00:10:36.962 [2024-11-05 16:49:25.738398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:36.962 [2024-11-05 16:49:25.738971] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106677 ] 00:10:37.221 [2024-11-05 16:49:25.907662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.221 [2024-11-05 16:49:26.093880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.123 16:49:27 -- accel/accel.sh@18 -- # out=' 00:10:39.123 SPDK Configuration: 00:10:39.123 Core mask: 0x1 00:10:39.123 00:10:39.123 Accel Perf Configuration: 00:10:39.123 Workload Type: copy_crc32c 00:10:39.123 CRC-32C seed: 0 00:10:39.123 Vector size: 4096 bytes 00:10:39.123 Transfer size: 4096 bytes 00:10:39.123 Vector count 1 00:10:39.123 Module: software 00:10:39.123 Queue depth: 32 00:10:39.123 Allocate depth: 32 00:10:39.123 # threads/core: 1 00:10:39.123 Run time: 1 seconds 00:10:39.123 Verify: Yes 00:10:39.123 00:10:39.123 Running for 1 seconds... 
00:10:39.123 00:10:39.123 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:39.123 ------------------------------------------------------------------------------------ 00:10:39.123 0,0 256928/s 1003 MiB/s 0 0 00:10:39.123 ==================================================================================== 00:10:39.123 Total 256928/s 1003 MiB/s 0 0' 00:10:39.123 16:49:27 -- accel/accel.sh@20 -- # IFS=: 00:10:39.123 16:49:27 -- accel/accel.sh@20 -- # read -r var val 00:10:39.123 16:49:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:39.123 16:49:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:39.123 16:49:27 -- accel/accel.sh@12 -- # build_accel_config 00:10:39.123 16:49:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:39.123 16:49:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:39.123 16:49:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:39.123 16:49:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:39.123 16:49:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:39.123 16:49:28 -- accel/accel.sh@41 -- # local IFS=, 00:10:39.123 16:49:28 -- accel/accel.sh@42 -- # jq -r . 00:10:39.382 [2024-11-05 16:49:28.042214] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:39.382 [2024-11-05 16:49:28.042573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106715 ] 00:10:39.382 [2024-11-05 16:49:28.211085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.641 [2024-11-05 16:49:28.385408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.899 16:49:28 -- accel/accel.sh@21 -- # val= 00:10:39.899 16:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.899 16:49:28 -- accel/accel.sh@20 -- # IFS=: 00:10:39.899 16:49:28 -- accel/accel.sh@20 -- # read -r var val 00:10:39.899 16:49:28 -- accel/accel.sh@21 -- # val= 00:10:39.899 16:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.899 16:49:28 -- accel/accel.sh@20 -- # IFS=: 00:10:39.899 16:49:28 -- accel/accel.sh@20 -- # read -r var val 00:10:39.899 16:49:28 -- accel/accel.sh@21 -- # val=0x1 00:10:39.899 16:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.899 16:49:28 -- accel/accel.sh@20 -- # IFS=: 00:10:39.899 16:49:28 -- accel/accel.sh@20 -- # read -r var val 00:10:39.900 16:49:28 -- accel/accel.sh@21 -- # val= 00:10:39.900 16:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # IFS=: 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # read -r var val 00:10:39.900 16:49:28 -- accel/accel.sh@21 -- # val= 00:10:39.900 16:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # IFS=: 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # read -r var val 00:10:39.900 16:49:28 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:39.900 16:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.900 16:49:28 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # IFS=: 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # read -r var val 00:10:39.900 16:49:28 -- accel/accel.sh@21 -- # val=0 00:10:39.900 16:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # IFS=: 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # read -r var val 00:10:39.900 
16:49:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:39.900 16:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # IFS=: 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # read -r var val 00:10:39.900 16:49:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:39.900 16:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # IFS=: 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # read -r var val 00:10:39.900 16:49:28 -- accel/accel.sh@21 -- # val= 00:10:39.900 16:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # IFS=: 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # read -r var val 00:10:39.900 16:49:28 -- accel/accel.sh@21 -- # val=software 00:10:39.900 16:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.900 16:49:28 -- accel/accel.sh@23 -- # accel_module=software 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # IFS=: 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # read -r var val 00:10:39.900 16:49:28 -- accel/accel.sh@21 -- # val=32 00:10:39.900 16:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # IFS=: 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # read -r var val 00:10:39.900 16:49:28 -- accel/accel.sh@21 -- # val=32 00:10:39.900 16:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # IFS=: 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # read -r var val 00:10:39.900 16:49:28 -- accel/accel.sh@21 -- # val=1 00:10:39.900 16:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # IFS=: 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # read -r var val 00:10:39.900 16:49:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:39.900 16:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # IFS=: 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # read -r var val 00:10:39.900 16:49:28 -- accel/accel.sh@21 -- # val=Yes 00:10:39.900 16:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # IFS=: 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # read -r var val 00:10:39.900 16:49:28 -- accel/accel.sh@21 -- # val= 00:10:39.900 16:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # IFS=: 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # read -r var val 00:10:39.900 16:49:28 -- accel/accel.sh@21 -- # val= 00:10:39.900 16:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # IFS=: 00:10:39.900 16:49:28 -- accel/accel.sh@20 -- # read -r var val 00:10:41.811 16:49:30 -- accel/accel.sh@21 -- # val= 00:10:41.811 16:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.811 16:49:30 -- accel/accel.sh@20 -- # IFS=: 00:10:41.811 16:49:30 -- accel/accel.sh@20 -- # read -r var val 00:10:41.811 16:49:30 -- accel/accel.sh@21 -- # val= 00:10:41.811 16:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.811 16:49:30 -- accel/accel.sh@20 -- # IFS=: 00:10:41.811 16:49:30 -- accel/accel.sh@20 -- # read -r var val 00:10:41.811 16:49:30 -- accel/accel.sh@21 -- # val= 00:10:41.811 16:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.811 16:49:30 -- accel/accel.sh@20 -- # IFS=: 00:10:41.811 16:49:30 -- accel/accel.sh@20 -- # read -r var val 00:10:41.811 16:49:30 -- accel/accel.sh@21 -- # val= 00:10:41.811 16:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.811 16:49:30 -- accel/accel.sh@20 -- # IFS=: 
00:10:41.811 16:49:30 -- accel/accel.sh@20 -- # read -r var val 00:10:41.811 16:49:30 -- accel/accel.sh@21 -- # val= 00:10:41.811 16:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.811 16:49:30 -- accel/accel.sh@20 -- # IFS=: 00:10:41.811 16:49:30 -- accel/accel.sh@20 -- # read -r var val 00:10:41.811 16:49:30 -- accel/accel.sh@21 -- # val= 00:10:41.811 16:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.811 16:49:30 -- accel/accel.sh@20 -- # IFS=: 00:10:41.811 16:49:30 -- accel/accel.sh@20 -- # read -r var val 00:10:41.811 ************************************ 00:10:41.811 END TEST accel_copy_crc32c 00:10:41.811 ************************************ 00:10:41.811 16:49:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:41.812 16:49:30 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:41.812 16:49:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:41.812 00:10:41.812 real 0m4.623s 00:10:41.812 user 0m4.066s 00:10:41.812 sys 0m0.373s 00:10:41.812 16:49:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:41.812 16:49:30 -- common/autotest_common.sh@10 -- # set +x 00:10:41.812 16:49:30 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:41.812 16:49:30 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:41.812 16:49:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:41.812 16:49:30 -- common/autotest_common.sh@10 -- # set +x 00:10:41.812 ************************************ 00:10:41.812 START TEST accel_copy_crc32c_C2 00:10:41.812 ************************************ 00:10:41.812 16:49:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:41.812 16:49:30 -- accel/accel.sh@16 -- # local accel_opc 00:10:41.812 16:49:30 -- accel/accel.sh@17 -- # local accel_module 00:10:41.812 16:49:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:41.812 16:49:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:41.812 16:49:30 -- accel/accel.sh@12 -- # build_accel_config 00:10:41.812 16:49:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:41.812 16:49:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:41.812 16:49:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:41.812 16:49:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:41.812 16:49:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:41.812 16:49:30 -- accel/accel.sh@41 -- # local IFS=, 00:10:41.812 16:49:30 -- accel/accel.sh@42 -- # jq -r . 00:10:41.812 [2024-11-05 16:49:30.409565] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
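The accel_copy_crc32c_C2 test starting above is the same copy_crc32c workload with '-C 2' added, which raises the vector count to 2; with 4096-byte vectors each operation then moves 8192 bytes, which is exactly what the configuration dump below reports. A minimal sketch of the equivalent standalone invocation, using the binary path recorded in this log (the '-c /dev/fd/62' JSON config argument passed by the wrapper is assumed here to be optional for an ad-hoc run):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w copy_crc32c -y -C 2   # 1-second verified run, 2 vectors per operation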
00:10:41.812 [2024-11-05 16:49:30.410103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106767 ] 00:10:41.812 [2024-11-05 16:49:30.576441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.070 [2024-11-05 16:49:30.749943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.975 16:49:32 -- accel/accel.sh@18 -- # out=' 00:10:43.975 SPDK Configuration: 00:10:43.975 Core mask: 0x1 00:10:43.975 00:10:43.975 Accel Perf Configuration: 00:10:43.975 Workload Type: copy_crc32c 00:10:43.975 CRC-32C seed: 0 00:10:43.975 Vector size: 4096 bytes 00:10:43.975 Transfer size: 8192 bytes 00:10:43.975 Vector count 2 00:10:43.975 Module: software 00:10:43.975 Queue depth: 32 00:10:43.975 Allocate depth: 32 00:10:43.975 # threads/core: 1 00:10:43.975 Run time: 1 seconds 00:10:43.975 Verify: Yes 00:10:43.975 00:10:43.975 Running for 1 seconds... 00:10:43.975 00:10:43.975 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:43.975 ------------------------------------------------------------------------------------ 00:10:43.975 0,0 179200/s 1400 MiB/s 0 0 00:10:43.975 ==================================================================================== 00:10:43.975 Total 179200/s 700 MiB/s 0 0' 00:10:43.975 16:49:32 -- accel/accel.sh@20 -- # IFS=: 00:10:43.975 16:49:32 -- accel/accel.sh@20 -- # read -r var val 00:10:43.975 16:49:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:43.975 16:49:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:43.975 16:49:32 -- accel/accel.sh@12 -- # build_accel_config 00:10:43.975 16:49:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:43.975 16:49:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:43.975 16:49:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:43.975 16:49:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:43.975 16:49:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:43.975 16:49:32 -- accel/accel.sh@41 -- # local IFS=, 00:10:43.975 16:49:32 -- accel/accel.sh@42 -- # jq -r . 00:10:43.975 [2024-11-05 16:49:32.679406] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
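One inconsistency stands out in the table above: with only core 0 active, the Total row should repeat the 0,0 row, yet it reads 700 MiB/s against 1400 MiB/s. The per-core figure matches the full 8192-byte transfer size (179200/s x 8192 B), while 700 MiB/s is what the 4096-byte vector size alone yields, so the Total line appears to have been computed from the vector size rather than the transfer size. The same product reproduces the earlier single-vector tables as well (289600 -> 1131, 433536 -> 1693, 256928 -> 1003 MiB/s). A quick check with the figures from this table:

    awk 'BEGIN { printf "%d MiB/s\n", 179200 * 8192 / (1024 * 1024) }'   # 1400, as in the 0,0 row
    awk 'BEGIN { printf "%d MiB/s\n", 179200 * 4096 / (1024 * 1024) }'   # 700, as in the Total row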
00:10:43.975 [2024-11-05 16:49:32.679756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106802 ] 00:10:43.975 [2024-11-05 16:49:32.845372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.233 [2024-11-05 16:49:33.017369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.492 16:49:33 -- accel/accel.sh@21 -- # val= 00:10:44.492 16:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # IFS=: 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # read -r var val 00:10:44.492 16:49:33 -- accel/accel.sh@21 -- # val= 00:10:44.492 16:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # IFS=: 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # read -r var val 00:10:44.492 16:49:33 -- accel/accel.sh@21 -- # val=0x1 00:10:44.492 16:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # IFS=: 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # read -r var val 00:10:44.492 16:49:33 -- accel/accel.sh@21 -- # val= 00:10:44.492 16:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # IFS=: 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # read -r var val 00:10:44.492 16:49:33 -- accel/accel.sh@21 -- # val= 00:10:44.492 16:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # IFS=: 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # read -r var val 00:10:44.492 16:49:33 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:44.492 16:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.492 16:49:33 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # IFS=: 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # read -r var val 00:10:44.492 16:49:33 -- accel/accel.sh@21 -- # val=0 00:10:44.492 16:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # IFS=: 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # read -r var val 00:10:44.492 16:49:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:44.492 16:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # IFS=: 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # read -r var val 00:10:44.492 16:49:33 -- accel/accel.sh@21 -- # val='8192 bytes' 00:10:44.492 16:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # IFS=: 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # read -r var val 00:10:44.492 16:49:33 -- accel/accel.sh@21 -- # val= 00:10:44.492 16:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # IFS=: 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # read -r var val 00:10:44.492 16:49:33 -- accel/accel.sh@21 -- # val=software 00:10:44.492 16:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.492 16:49:33 -- accel/accel.sh@23 -- # accel_module=software 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # IFS=: 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # read -r var val 00:10:44.492 16:49:33 -- accel/accel.sh@21 -- # val=32 00:10:44.492 16:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # IFS=: 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # read -r var val 00:10:44.492 16:49:33 -- accel/accel.sh@21 -- # val=32 
00:10:44.492 16:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # IFS=: 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # read -r var val 00:10:44.492 16:49:33 -- accel/accel.sh@21 -- # val=1 00:10:44.492 16:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # IFS=: 00:10:44.492 16:49:33 -- accel/accel.sh@20 -- # read -r var val 00:10:44.492 16:49:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:44.493 16:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.493 16:49:33 -- accel/accel.sh@20 -- # IFS=: 00:10:44.493 16:49:33 -- accel/accel.sh@20 -- # read -r var val 00:10:44.493 16:49:33 -- accel/accel.sh@21 -- # val=Yes 00:10:44.493 16:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.493 16:49:33 -- accel/accel.sh@20 -- # IFS=: 00:10:44.493 16:49:33 -- accel/accel.sh@20 -- # read -r var val 00:10:44.493 16:49:33 -- accel/accel.sh@21 -- # val= 00:10:44.493 16:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.493 16:49:33 -- accel/accel.sh@20 -- # IFS=: 00:10:44.493 16:49:33 -- accel/accel.sh@20 -- # read -r var val 00:10:44.493 16:49:33 -- accel/accel.sh@21 -- # val= 00:10:44.493 16:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.493 16:49:33 -- accel/accel.sh@20 -- # IFS=: 00:10:44.493 16:49:33 -- accel/accel.sh@20 -- # read -r var val 00:10:46.396 16:49:34 -- accel/accel.sh@21 -- # val= 00:10:46.396 16:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.396 16:49:34 -- accel/accel.sh@20 -- # IFS=: 00:10:46.396 16:49:34 -- accel/accel.sh@20 -- # read -r var val 00:10:46.396 16:49:34 -- accel/accel.sh@21 -- # val= 00:10:46.396 16:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.396 16:49:34 -- accel/accel.sh@20 -- # IFS=: 00:10:46.396 16:49:34 -- accel/accel.sh@20 -- # read -r var val 00:10:46.396 16:49:34 -- accel/accel.sh@21 -- # val= 00:10:46.396 16:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.396 16:49:34 -- accel/accel.sh@20 -- # IFS=: 00:10:46.396 16:49:34 -- accel/accel.sh@20 -- # read -r var val 00:10:46.396 16:49:34 -- accel/accel.sh@21 -- # val= 00:10:46.396 16:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.396 16:49:34 -- accel/accel.sh@20 -- # IFS=: 00:10:46.396 16:49:34 -- accel/accel.sh@20 -- # read -r var val 00:10:46.396 16:49:34 -- accel/accel.sh@21 -- # val= 00:10:46.396 16:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.396 16:49:34 -- accel/accel.sh@20 -- # IFS=: 00:10:46.396 16:49:34 -- accel/accel.sh@20 -- # read -r var val 00:10:46.396 16:49:34 -- accel/accel.sh@21 -- # val= 00:10:46.396 16:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.396 16:49:34 -- accel/accel.sh@20 -- # IFS=: 00:10:46.396 16:49:34 -- accel/accel.sh@20 -- # read -r var val 00:10:46.396 ************************************ 00:10:46.396 END TEST accel_copy_crc32c_C2 00:10:46.396 ************************************ 00:10:46.396 16:49:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:46.396 16:49:34 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:46.396 16:49:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:46.396 00:10:46.396 real 0m4.555s 00:10:46.396 user 0m4.022s 00:10:46.396 sys 0m0.356s 00:10:46.396 16:49:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:46.396 16:49:34 -- common/autotest_common.sh@10 -- # set +x 00:10:46.396 16:49:34 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:46.396 16:49:34 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
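Every test in this section runs through the same run_test wrapper, which produces the START/END banners and the real/user/sys timings interleaved above. A rough sketch of its shape, inferred from that output rather than taken from autotest_common.sh itself:

    run_test() {
        local test_name=$1
        shift
        echo "START TEST $test_name"
        time "$@"                     # e.g. accel_test -t 1 -w dualcast -y
        local rc=$?
        echo "END TEST $test_name"
        return $rc
    }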
00:10:46.396 16:49:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:46.396 16:49:34 -- common/autotest_common.sh@10 -- # set +x 00:10:46.396 ************************************ 00:10:46.396 START TEST accel_dualcast 00:10:46.396 ************************************ 00:10:46.396 16:49:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:10:46.396 16:49:34 -- accel/accel.sh@16 -- # local accel_opc 00:10:46.396 16:49:34 -- accel/accel.sh@17 -- # local accel_module 00:10:46.396 16:49:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:10:46.396 16:49:34 -- accel/accel.sh@12 -- # build_accel_config 00:10:46.396 16:49:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:46.396 16:49:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:46.396 16:49:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:46.396 16:49:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:46.396 16:49:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:46.396 16:49:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:46.396 16:49:34 -- accel/accel.sh@41 -- # local IFS=, 00:10:46.396 16:49:34 -- accel/accel.sh@42 -- # jq -r . 00:10:46.396 [2024-11-05 16:49:35.009996] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:46.396 [2024-11-05 16:49:35.010752] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106853 ] 00:10:46.396 [2024-11-05 16:49:35.178546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.654 [2024-11-05 16:49:35.345173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.555 16:49:37 -- accel/accel.sh@18 -- # out=' 00:10:48.555 SPDK Configuration: 00:10:48.555 Core mask: 0x1 00:10:48.555 00:10:48.555 Accel Perf Configuration: 00:10:48.555 Workload Type: dualcast 00:10:48.555 Transfer size: 4096 bytes 00:10:48.555 Vector count 1 00:10:48.555 Module: software 00:10:48.555 Queue depth: 32 00:10:48.555 Allocate depth: 32 00:10:48.555 # threads/core: 1 00:10:48.555 Run time: 1 seconds 00:10:48.555 Verify: Yes 00:10:48.555 00:10:48.555 Running for 1 seconds... 00:10:48.555 00:10:48.555 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:48.555 ------------------------------------------------------------------------------------ 00:10:48.555 0,0 329984/s 1289 MiB/s 0 0 00:10:48.555 ==================================================================================== 00:10:48.555 Total 329984/s 1289 MiB/s 0 0' 00:10:48.555 16:49:37 -- accel/accel.sh@20 -- # IFS=: 00:10:48.555 16:49:37 -- accel/accel.sh@20 -- # read -r var val 00:10:48.555 16:49:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:48.555 16:49:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:48.555 16:49:37 -- accel/accel.sh@12 -- # build_accel_config 00:10:48.555 16:49:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:48.555 16:49:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:48.555 16:49:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:48.555 16:49:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:48.555 16:49:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:48.555 16:49:37 -- accel/accel.sh@41 -- # local IFS=, 00:10:48.555 16:49:37 -- accel/accel.sh@42 -- # jq -r . 
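The dualcast numbers above are internally consistent: 329984 transfers/s over 4096-byte buffers is exactly the 1289 MiB/s shown on both rows. Dualcast writes one source buffer out to two destinations, and accel_perf evidently counts one 4096-byte buffer per completed operation here. Checked the same way as before:

    awk 'BEGIN { printf "%d MiB/s\n", 329984 * 4096 / (1024 * 1024) }'   # 1289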
00:10:48.555 [2024-11-05 16:49:37.288686] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:48.555 [2024-11-05 16:49:37.289675] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106887 ] 00:10:48.814 [2024-11-05 16:49:37.457382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.814 [2024-11-05 16:49:37.631906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.073 16:49:37 -- accel/accel.sh@21 -- # val= 00:10:49.073 16:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # IFS=: 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # read -r var val 00:10:49.073 16:49:37 -- accel/accel.sh@21 -- # val= 00:10:49.073 16:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # IFS=: 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # read -r var val 00:10:49.073 16:49:37 -- accel/accel.sh@21 -- # val=0x1 00:10:49.073 16:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # IFS=: 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # read -r var val 00:10:49.073 16:49:37 -- accel/accel.sh@21 -- # val= 00:10:49.073 16:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # IFS=: 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # read -r var val 00:10:49.073 16:49:37 -- accel/accel.sh@21 -- # val= 00:10:49.073 16:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # IFS=: 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # read -r var val 00:10:49.073 16:49:37 -- accel/accel.sh@21 -- # val=dualcast 00:10:49.073 16:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.073 16:49:37 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # IFS=: 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # read -r var val 00:10:49.073 16:49:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:49.073 16:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # IFS=: 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # read -r var val 00:10:49.073 16:49:37 -- accel/accel.sh@21 -- # val= 00:10:49.073 16:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # IFS=: 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # read -r var val 00:10:49.073 16:49:37 -- accel/accel.sh@21 -- # val=software 00:10:49.073 16:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.073 16:49:37 -- accel/accel.sh@23 -- # accel_module=software 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # IFS=: 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # read -r var val 00:10:49.073 16:49:37 -- accel/accel.sh@21 -- # val=32 00:10:49.073 16:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # IFS=: 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # read -r var val 00:10:49.073 16:49:37 -- accel/accel.sh@21 -- # val=32 00:10:49.073 16:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # IFS=: 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # read -r var val 00:10:49.073 16:49:37 -- accel/accel.sh@21 -- # val=1 00:10:49.073 16:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # IFS=: 00:10:49.073 
16:49:37 -- accel/accel.sh@20 -- # read -r var val 00:10:49.073 16:49:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:49.073 16:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # IFS=: 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # read -r var val 00:10:49.073 16:49:37 -- accel/accel.sh@21 -- # val=Yes 00:10:49.073 16:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # IFS=: 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # read -r var val 00:10:49.073 16:49:37 -- accel/accel.sh@21 -- # val= 00:10:49.073 16:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # IFS=: 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # read -r var val 00:10:49.073 16:49:37 -- accel/accel.sh@21 -- # val= 00:10:49.073 16:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # IFS=: 00:10:49.073 16:49:37 -- accel/accel.sh@20 -- # read -r var val 00:10:50.976 16:49:39 -- accel/accel.sh@21 -- # val= 00:10:50.976 16:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.976 16:49:39 -- accel/accel.sh@20 -- # IFS=: 00:10:50.976 16:49:39 -- accel/accel.sh@20 -- # read -r var val 00:10:50.976 16:49:39 -- accel/accel.sh@21 -- # val= 00:10:50.976 16:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.976 16:49:39 -- accel/accel.sh@20 -- # IFS=: 00:10:50.976 16:49:39 -- accel/accel.sh@20 -- # read -r var val 00:10:50.976 16:49:39 -- accel/accel.sh@21 -- # val= 00:10:50.976 16:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.976 16:49:39 -- accel/accel.sh@20 -- # IFS=: 00:10:50.976 16:49:39 -- accel/accel.sh@20 -- # read -r var val 00:10:50.976 16:49:39 -- accel/accel.sh@21 -- # val= 00:10:50.976 16:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.976 16:49:39 -- accel/accel.sh@20 -- # IFS=: 00:10:50.976 16:49:39 -- accel/accel.sh@20 -- # read -r var val 00:10:50.976 16:49:39 -- accel/accel.sh@21 -- # val= 00:10:50.976 16:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.976 16:49:39 -- accel/accel.sh@20 -- # IFS=: 00:10:50.976 16:49:39 -- accel/accel.sh@20 -- # read -r var val 00:10:50.976 16:49:39 -- accel/accel.sh@21 -- # val= 00:10:50.976 16:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.976 16:49:39 -- accel/accel.sh@20 -- # IFS=: 00:10:50.976 16:49:39 -- accel/accel.sh@20 -- # read -r var val 00:10:50.976 ************************************ 00:10:50.976 END TEST accel_dualcast 00:10:50.976 ************************************ 00:10:50.976 16:49:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:50.976 16:49:39 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:10:50.976 16:49:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:50.976 00:10:50.976 real 0m4.589s 00:10:50.976 user 0m4.035s 00:10:50.976 sys 0m0.358s 00:10:50.976 16:49:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:50.976 16:49:39 -- common/autotest_common.sh@10 -- # set +x 00:10:50.976 16:49:39 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:50.976 16:49:39 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:50.976 16:49:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:50.976 16:49:39 -- common/autotest_common.sh@10 -- # set +x 00:10:50.976 ************************************ 00:10:50.976 START TEST accel_compare 00:10:50.976 ************************************ 00:10:50.976 16:49:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:10:50.976 
16:49:39 -- accel/accel.sh@16 -- # local accel_opc 00:10:50.976 16:49:39 -- accel/accel.sh@17 -- # local accel_module 00:10:50.976 16:49:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:10:50.976 16:49:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:50.976 16:49:39 -- accel/accel.sh@12 -- # build_accel_config 00:10:50.976 16:49:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:50.976 16:49:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:50.976 16:49:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:50.976 16:49:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:50.976 16:49:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:50.976 16:49:39 -- accel/accel.sh@41 -- # local IFS=, 00:10:50.976 16:49:39 -- accel/accel.sh@42 -- # jq -r . 00:10:50.976 [2024-11-05 16:49:39.650552] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:50.976 [2024-11-05 16:49:39.650906] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106941 ] 00:10:50.976 [2024-11-05 16:49:39.818306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.234 [2024-11-05 16:49:39.976616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.137 16:49:41 -- accel/accel.sh@18 -- # out=' 00:10:53.137 SPDK Configuration: 00:10:53.137 Core mask: 0x1 00:10:53.137 00:10:53.137 Accel Perf Configuration: 00:10:53.137 Workload Type: compare 00:10:53.137 Transfer size: 4096 bytes 00:10:53.137 Vector count 1 00:10:53.137 Module: software 00:10:53.137 Queue depth: 32 00:10:53.137 Allocate depth: 32 00:10:53.137 # threads/core: 1 00:10:53.137 Run time: 1 seconds 00:10:53.137 Verify: Yes 00:10:53.137 00:10:53.137 Running for 1 seconds... 00:10:53.137 00:10:53.137 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:53.137 ------------------------------------------------------------------------------------ 00:10:53.137 0,0 462752/s 1807 MiB/s 0 0 00:10:53.137 ==================================================================================== 00:10:53.137 Total 462752/s 1807 MiB/s 0 0' 00:10:53.137 16:49:41 -- accel/accel.sh@20 -- # IFS=: 00:10:53.137 16:49:41 -- accel/accel.sh@20 -- # read -r var val 00:10:53.137 16:49:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:53.137 16:49:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:53.137 16:49:41 -- accel/accel.sh@12 -- # build_accel_config 00:10:53.137 16:49:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:53.137 16:49:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:53.137 16:49:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:53.137 16:49:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:53.137 16:49:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:53.137 16:49:41 -- accel/accel.sh@41 -- # local IFS=, 00:10:53.137 16:49:41 -- accel/accel.sh@42 -- # jq -r . 00:10:53.137 [2024-11-05 16:49:41.927902] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
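compare is the fastest software path in this run: 462752 transfers/s times the 4096-byte transfer size gives 1807 MiB/s after truncation, matching the table, and the Miscompares column staying at 0 means every buffer pair matched under the -y verify option. The arithmetic check:

    awk 'BEGIN { printf "%d MiB/s\n", int(462752 * 4096 / (1024 * 1024)) }'   # 1807 (1807.625 truncated)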
00:10:53.137 [2024-11-05 16:49:41.928298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106976 ] 00:10:53.396 [2024-11-05 16:49:42.093978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.396 [2024-11-05 16:49:42.261333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.654 16:49:42 -- accel/accel.sh@21 -- # val= 00:10:53.654 16:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.654 16:49:42 -- accel/accel.sh@20 -- # IFS=: 00:10:53.654 16:49:42 -- accel/accel.sh@20 -- # read -r var val 00:10:53.654 16:49:42 -- accel/accel.sh@21 -- # val= 00:10:53.654 16:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.654 16:49:42 -- accel/accel.sh@20 -- # IFS=: 00:10:53.654 16:49:42 -- accel/accel.sh@20 -- # read -r var val 00:10:53.654 16:49:42 -- accel/accel.sh@21 -- # val=0x1 00:10:53.654 16:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.654 16:49:42 -- accel/accel.sh@20 -- # IFS=: 00:10:53.654 16:49:42 -- accel/accel.sh@20 -- # read -r var val 00:10:53.654 16:49:42 -- accel/accel.sh@21 -- # val= 00:10:53.654 16:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.654 16:49:42 -- accel/accel.sh@20 -- # IFS=: 00:10:53.654 16:49:42 -- accel/accel.sh@20 -- # read -r var val 00:10:53.654 16:49:42 -- accel/accel.sh@21 -- # val= 00:10:53.654 16:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.654 16:49:42 -- accel/accel.sh@20 -- # IFS=: 00:10:53.654 16:49:42 -- accel/accel.sh@20 -- # read -r var val 00:10:53.654 16:49:42 -- accel/accel.sh@21 -- # val=compare 00:10:53.654 16:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.654 16:49:42 -- accel/accel.sh@24 -- # accel_opc=compare 00:10:53.654 16:49:42 -- accel/accel.sh@20 -- # IFS=: 00:10:53.654 16:49:42 -- accel/accel.sh@20 -- # read -r var val 00:10:53.654 16:49:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:53.654 16:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.655 16:49:42 -- accel/accel.sh@20 -- # IFS=: 00:10:53.655 16:49:42 -- accel/accel.sh@20 -- # read -r var val 00:10:53.655 16:49:42 -- accel/accel.sh@21 -- # val= 00:10:53.655 16:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.655 16:49:42 -- accel/accel.sh@20 -- # IFS=: 00:10:53.655 16:49:42 -- accel/accel.sh@20 -- # read -r var val 00:10:53.655 16:49:42 -- accel/accel.sh@21 -- # val=software 00:10:53.655 16:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.655 16:49:42 -- accel/accel.sh@23 -- # accel_module=software 00:10:53.655 16:49:42 -- accel/accel.sh@20 -- # IFS=: 00:10:53.655 16:49:42 -- accel/accel.sh@20 -- # read -r var val 00:10:53.655 16:49:42 -- accel/accel.sh@21 -- # val=32 00:10:53.655 16:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.655 16:49:42 -- accel/accel.sh@20 -- # IFS=: 00:10:53.655 16:49:42 -- accel/accel.sh@20 -- # read -r var val 00:10:53.655 16:49:42 -- accel/accel.sh@21 -- # val=32 00:10:53.655 16:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.655 16:49:42 -- accel/accel.sh@20 -- # IFS=: 00:10:53.655 16:49:42 -- accel/accel.sh@20 -- # read -r var val 00:10:53.655 16:49:42 -- accel/accel.sh@21 -- # val=1 00:10:53.655 16:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.655 16:49:42 -- accel/accel.sh@20 -- # IFS=: 00:10:53.655 16:49:42 -- accel/accel.sh@20 -- # read -r var val 00:10:53.655 16:49:42 -- accel/accel.sh@21 -- # val='1 seconds' 
00:10:53.655 16:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.655 16:49:42 -- accel/accel.sh@20 -- # IFS=: 00:10:53.655 16:49:42 -- accel/accel.sh@20 -- # read -r var val 00:10:53.655 16:49:42 -- accel/accel.sh@21 -- # val=Yes 00:10:53.655 16:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.655 16:49:42 -- accel/accel.sh@20 -- # IFS=: 00:10:53.655 16:49:42 -- accel/accel.sh@20 -- # read -r var val 00:10:53.655 16:49:42 -- accel/accel.sh@21 -- # val= 00:10:53.655 16:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.655 16:49:42 -- accel/accel.sh@20 -- # IFS=: 00:10:53.655 16:49:42 -- accel/accel.sh@20 -- # read -r var val 00:10:53.655 16:49:42 -- accel/accel.sh@21 -- # val= 00:10:53.655 16:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.655 16:49:42 -- accel/accel.sh@20 -- # IFS=: 00:10:53.655 16:49:42 -- accel/accel.sh@20 -- # read -r var val 00:10:55.558 16:49:44 -- accel/accel.sh@21 -- # val= 00:10:55.558 16:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.558 16:49:44 -- accel/accel.sh@20 -- # IFS=: 00:10:55.558 16:49:44 -- accel/accel.sh@20 -- # read -r var val 00:10:55.558 16:49:44 -- accel/accel.sh@21 -- # val= 00:10:55.558 16:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.558 16:49:44 -- accel/accel.sh@20 -- # IFS=: 00:10:55.558 16:49:44 -- accel/accel.sh@20 -- # read -r var val 00:10:55.558 16:49:44 -- accel/accel.sh@21 -- # val= 00:10:55.558 16:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.558 16:49:44 -- accel/accel.sh@20 -- # IFS=: 00:10:55.558 16:49:44 -- accel/accel.sh@20 -- # read -r var val 00:10:55.558 16:49:44 -- accel/accel.sh@21 -- # val= 00:10:55.558 16:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.558 16:49:44 -- accel/accel.sh@20 -- # IFS=: 00:10:55.558 16:49:44 -- accel/accel.sh@20 -- # read -r var val 00:10:55.558 16:49:44 -- accel/accel.sh@21 -- # val= 00:10:55.558 16:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.558 16:49:44 -- accel/accel.sh@20 -- # IFS=: 00:10:55.558 16:49:44 -- accel/accel.sh@20 -- # read -r var val 00:10:55.558 16:49:44 -- accel/accel.sh@21 -- # val= 00:10:55.558 16:49:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.558 16:49:44 -- accel/accel.sh@20 -- # IFS=: 00:10:55.558 16:49:44 -- accel/accel.sh@20 -- # read -r var val 00:10:55.558 ************************************ 00:10:55.558 END TEST accel_compare 00:10:55.558 ************************************ 00:10:55.558 16:49:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:55.558 16:49:44 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:10:55.558 16:49:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:55.558 00:10:55.558 real 0m4.573s 00:10:55.558 user 0m4.013s 00:10:55.558 sys 0m0.369s 00:10:55.558 16:49:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:55.558 16:49:44 -- common/autotest_common.sh@10 -- # set +x 00:10:55.558 16:49:44 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:55.558 16:49:44 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:55.558 16:49:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:55.558 16:49:44 -- common/autotest_common.sh@10 -- # set +x 00:10:55.558 ************************************ 00:10:55.558 START TEST accel_xor 00:10:55.558 ************************************ 00:10:55.558 16:49:44 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:10:55.558 16:49:44 -- accel/accel.sh@16 -- # local accel_opc 00:10:55.558 16:49:44 -- accel/accel.sh@17 -- # local accel_module 00:10:55.558 
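The xor pass starting here keeps the stock source-buffer count; the configuration block below reports 'Source buffers: 2'. Its invocation, mirroring the '@12' record that follows:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
# with no explicit -x count, this log shows two source buffers xored per transfer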
16:49:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:10:55.558 16:49:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:55.558 16:49:44 -- accel/accel.sh@12 -- # build_accel_config 00:10:55.558 16:49:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:55.558 16:49:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:55.558 16:49:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:55.558 16:49:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:55.558 16:49:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:55.558 16:49:44 -- accel/accel.sh@41 -- # local IFS=, 00:10:55.558 16:49:44 -- accel/accel.sh@42 -- # jq -r . 00:10:55.558 [2024-11-05 16:49:44.277587] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:55.558 [2024-11-05 16:49:44.277909] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107021 ] 00:10:55.558 [2024-11-05 16:49:44.446192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.817 [2024-11-05 16:49:44.610457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.721 16:49:46 -- accel/accel.sh@18 -- # out=' 00:10:57.721 SPDK Configuration: 00:10:57.721 Core mask: 0x1 00:10:57.721 00:10:57.721 Accel Perf Configuration: 00:10:57.721 Workload Type: xor 00:10:57.721 Source buffers: 2 00:10:57.721 Transfer size: 4096 bytes 00:10:57.721 Vector count 1 00:10:57.721 Module: software 00:10:57.721 Queue depth: 32 00:10:57.721 Allocate depth: 32 00:10:57.721 # threads/core: 1 00:10:57.721 Run time: 1 seconds 00:10:57.721 Verify: Yes 00:10:57.721 00:10:57.721 Running for 1 seconds... 00:10:57.721 00:10:57.721 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:57.721 ------------------------------------------------------------------------------------ 00:10:57.721 0,0 257728/s 1006 MiB/s 0 0 00:10:57.721 ==================================================================================== 00:10:57.721 Total 257728/s 1006 MiB/s 0 0' 00:10:57.721 16:49:46 -- accel/accel.sh@20 -- # IFS=: 00:10:57.721 16:49:46 -- accel/accel.sh@20 -- # read -r var val 00:10:57.721 16:49:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:57.721 16:49:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:57.721 16:49:46 -- accel/accel.sh@12 -- # build_accel_config 00:10:57.721 16:49:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:57.721 16:49:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:57.721 16:49:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:57.721 16:49:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:57.721 16:49:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:57.721 16:49:46 -- accel/accel.sh@41 -- # local IFS=, 00:10:57.721 16:49:46 -- accel/accel.sh@42 -- # jq -r . 00:10:57.721 [2024-11-05 16:49:46.542238] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:57.721 [2024-11-05 16:49:46.542591] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107063 ] 00:10:57.979 [2024-11-05 16:49:46.709825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.238 [2024-11-05 16:49:46.883653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.238 16:49:47 -- accel/accel.sh@21 -- # val= 00:10:58.238 16:49:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.238 16:49:47 -- accel/accel.sh@20 -- # IFS=: 00:10:58.238 16:49:47 -- accel/accel.sh@20 -- # read -r var val 00:10:58.238 16:49:47 -- accel/accel.sh@21 -- # val= 00:10:58.238 16:49:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.238 16:49:47 -- accel/accel.sh@20 -- # IFS=: 00:10:58.238 16:49:47 -- accel/accel.sh@20 -- # read -r var val 00:10:58.238 16:49:47 -- accel/accel.sh@21 -- # val=0x1 00:10:58.238 16:49:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.238 16:49:47 -- accel/accel.sh@20 -- # IFS=: 00:10:58.238 16:49:47 -- accel/accel.sh@20 -- # read -r var val 00:10:58.238 16:49:47 -- accel/accel.sh@21 -- # val= 00:10:58.238 16:49:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.238 16:49:47 -- accel/accel.sh@20 -- # IFS=: 00:10:58.238 16:49:47 -- accel/accel.sh@20 -- # read -r var val 00:10:58.239 16:49:47 -- accel/accel.sh@21 -- # val= 00:10:58.239 16:49:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # IFS=: 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # read -r var val 00:10:58.239 16:49:47 -- accel/accel.sh@21 -- # val=xor 00:10:58.239 16:49:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.239 16:49:47 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # IFS=: 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # read -r var val 00:10:58.239 16:49:47 -- accel/accel.sh@21 -- # val=2 00:10:58.239 16:49:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # IFS=: 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # read -r var val 00:10:58.239 16:49:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:58.239 16:49:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # IFS=: 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # read -r var val 00:10:58.239 16:49:47 -- accel/accel.sh@21 -- # val= 00:10:58.239 16:49:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # IFS=: 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # read -r var val 00:10:58.239 16:49:47 -- accel/accel.sh@21 -- # val=software 00:10:58.239 16:49:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.239 16:49:47 -- accel/accel.sh@23 -- # accel_module=software 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # IFS=: 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # read -r var val 00:10:58.239 16:49:47 -- accel/accel.sh@21 -- # val=32 00:10:58.239 16:49:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # IFS=: 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # read -r var val 00:10:58.239 16:49:47 -- accel/accel.sh@21 -- # val=32 00:10:58.239 16:49:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # IFS=: 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # read -r var val 00:10:58.239 16:49:47 -- accel/accel.sh@21 -- # val=1 00:10:58.239 16:49:47 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # IFS=: 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # read -r var val 00:10:58.239 16:49:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:58.239 16:49:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # IFS=: 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # read -r var val 00:10:58.239 16:49:47 -- accel/accel.sh@21 -- # val=Yes 00:10:58.239 16:49:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # IFS=: 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # read -r var val 00:10:58.239 16:49:47 -- accel/accel.sh@21 -- # val= 00:10:58.239 16:49:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # IFS=: 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # read -r var val 00:10:58.239 16:49:47 -- accel/accel.sh@21 -- # val= 00:10:58.239 16:49:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # IFS=: 00:10:58.239 16:49:47 -- accel/accel.sh@20 -- # read -r var val 00:11:00.180 16:49:48 -- accel/accel.sh@21 -- # val= 00:11:00.180 16:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.180 16:49:48 -- accel/accel.sh@20 -- # IFS=: 00:11:00.180 16:49:48 -- accel/accel.sh@20 -- # read -r var val 00:11:00.180 16:49:48 -- accel/accel.sh@21 -- # val= 00:11:00.180 16:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.180 16:49:48 -- accel/accel.sh@20 -- # IFS=: 00:11:00.180 16:49:48 -- accel/accel.sh@20 -- # read -r var val 00:11:00.180 16:49:48 -- accel/accel.sh@21 -- # val= 00:11:00.180 16:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.180 16:49:48 -- accel/accel.sh@20 -- # IFS=: 00:11:00.180 16:49:48 -- accel/accel.sh@20 -- # read -r var val 00:11:00.180 16:49:48 -- accel/accel.sh@21 -- # val= 00:11:00.180 16:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.180 16:49:48 -- accel/accel.sh@20 -- # IFS=: 00:11:00.180 16:49:48 -- accel/accel.sh@20 -- # read -r var val 00:11:00.180 16:49:48 -- accel/accel.sh@21 -- # val= 00:11:00.180 16:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.180 16:49:48 -- accel/accel.sh@20 -- # IFS=: 00:11:00.180 16:49:48 -- accel/accel.sh@20 -- # read -r var val 00:11:00.180 16:49:48 -- accel/accel.sh@21 -- # val= 00:11:00.180 16:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.180 16:49:48 -- accel/accel.sh@20 -- # IFS=: 00:11:00.180 16:49:48 -- accel/accel.sh@20 -- # read -r var val 00:11:00.180 ************************************ 00:11:00.180 END TEST accel_xor 00:11:00.180 ************************************ 00:11:00.180 16:49:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:00.180 16:49:48 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:11:00.180 16:49:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:00.180 00:11:00.180 real 0m4.589s 00:11:00.180 user 0m3.983s 00:11:00.180 sys 0m0.416s 00:11:00.180 16:49:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:00.180 16:49:48 -- common/autotest_common.sh@10 -- # set +x 00:11:00.180 16:49:48 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:11:00.180 16:49:48 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:11:00.180 16:49:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:00.180 16:49:48 -- common/autotest_common.sh@10 -- # set +x 00:11:00.180 ************************************ 00:11:00.180 START TEST accel_xor 00:11:00.180 ************************************ 00:11:00.180 
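Bandwidth in these result tables is just transfers/s times the 4096-byte transfer size: for the two-buffer xor above, 257728/s x 4096 B / 2^20 is about 1006 MiB/s, matching the reported figure. The three-source variant starting here differs from the previous run only by the '-x 3' flag:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
# -x 3  use three source buffers per xor ('Source buffers: 3' in the table below)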
16:49:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:11:00.180 16:49:48 -- accel/accel.sh@16 -- # local accel_opc 00:11:00.180 16:49:48 -- accel/accel.sh@17 -- # local accel_module 00:11:00.180 16:49:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:11:00.180 16:49:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:00.180 16:49:48 -- accel/accel.sh@12 -- # build_accel_config 00:11:00.180 16:49:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:00.180 16:49:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:00.180 16:49:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:00.180 16:49:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:00.180 16:49:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:00.180 16:49:48 -- accel/accel.sh@41 -- # local IFS=, 00:11:00.180 16:49:48 -- accel/accel.sh@42 -- # jq -r . 00:11:00.180 [2024-11-05 16:49:48.914380] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:00.180 [2024-11-05 16:49:48.915338] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107110 ] 00:11:00.439 [2024-11-05 16:49:49.082946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.439 [2024-11-05 16:49:49.256099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.441 16:49:51 -- accel/accel.sh@18 -- # out=' 00:11:02.441 SPDK Configuration: 00:11:02.441 Core mask: 0x1 00:11:02.441 00:11:02.441 Accel Perf Configuration: 00:11:02.441 Workload Type: xor 00:11:02.441 Source buffers: 3 00:11:02.441 Transfer size: 4096 bytes 00:11:02.441 Vector count 1 00:11:02.441 Module: software 00:11:02.441 Queue depth: 32 00:11:02.441 Allocate depth: 32 00:11:02.441 # threads/core: 1 00:11:02.441 Run time: 1 seconds 00:11:02.441 Verify: Yes 00:11:02.441 00:11:02.441 Running for 1 seconds... 00:11:02.441 00:11:02.441 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:02.441 ------------------------------------------------------------------------------------ 00:11:02.441 0,0 230560/s 900 MiB/s 0 0 00:11:02.441 ==================================================================================== 00:11:02.441 Total 230560/s 900 MiB/s 0 0' 00:11:02.441 16:49:51 -- accel/accel.sh@20 -- # IFS=: 00:11:02.441 16:49:51 -- accel/accel.sh@20 -- # read -r var val 00:11:02.441 16:49:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:11:02.441 16:49:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:02.441 16:49:51 -- accel/accel.sh@12 -- # build_accel_config 00:11:02.441 16:49:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:02.441 16:49:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:02.441 16:49:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:02.441 16:49:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:02.441 16:49:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:02.441 16:49:51 -- accel/accel.sh@41 -- # local IFS=, 00:11:02.441 16:49:51 -- accel/accel.sh@42 -- # jq -r . 00:11:02.441 [2024-11-05 16:49:51.222916] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:02.441 [2024-11-05 16:49:51.223305] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107143 ] 00:11:02.718 [2024-11-05 16:49:51.393842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.718 [2024-11-05 16:49:51.593159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.976 16:49:51 -- accel/accel.sh@21 -- # val= 00:11:02.976 16:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.976 16:49:51 -- accel/accel.sh@20 -- # IFS=: 00:11:02.976 16:49:51 -- accel/accel.sh@20 -- # read -r var val 00:11:02.976 16:49:51 -- accel/accel.sh@21 -- # val= 00:11:02.976 16:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.976 16:49:51 -- accel/accel.sh@20 -- # IFS=: 00:11:02.976 16:49:51 -- accel/accel.sh@20 -- # read -r var val 00:11:02.976 16:49:51 -- accel/accel.sh@21 -- # val=0x1 00:11:02.976 16:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.976 16:49:51 -- accel/accel.sh@20 -- # IFS=: 00:11:02.976 16:49:51 -- accel/accel.sh@20 -- # read -r var val 00:11:02.976 16:49:51 -- accel/accel.sh@21 -- # val= 00:11:02.976 16:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.976 16:49:51 -- accel/accel.sh@20 -- # IFS=: 00:11:02.976 16:49:51 -- accel/accel.sh@20 -- # read -r var val 00:11:02.977 16:49:51 -- accel/accel.sh@21 -- # val= 00:11:02.977 16:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # IFS=: 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # read -r var val 00:11:02.977 16:49:51 -- accel/accel.sh@21 -- # val=xor 00:11:02.977 16:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.977 16:49:51 -- accel/accel.sh@24 -- # accel_opc=xor 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # IFS=: 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # read -r var val 00:11:02.977 16:49:51 -- accel/accel.sh@21 -- # val=3 00:11:02.977 16:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # IFS=: 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # read -r var val 00:11:02.977 16:49:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:02.977 16:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # IFS=: 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # read -r var val 00:11:02.977 16:49:51 -- accel/accel.sh@21 -- # val= 00:11:02.977 16:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # IFS=: 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # read -r var val 00:11:02.977 16:49:51 -- accel/accel.sh@21 -- # val=software 00:11:02.977 16:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.977 16:49:51 -- accel/accel.sh@23 -- # accel_module=software 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # IFS=: 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # read -r var val 00:11:02.977 16:49:51 -- accel/accel.sh@21 -- # val=32 00:11:02.977 16:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # IFS=: 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # read -r var val 00:11:02.977 16:49:51 -- accel/accel.sh@21 -- # val=32 00:11:02.977 16:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # IFS=: 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # read -r var val 00:11:02.977 16:49:51 -- accel/accel.sh@21 -- # val=1 00:11:02.977 16:49:51 -- 
accel/accel.sh@22 -- # case "$var" in 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # IFS=: 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # read -r var val 00:11:02.977 16:49:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:02.977 16:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # IFS=: 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # read -r var val 00:11:02.977 16:49:51 -- accel/accel.sh@21 -- # val=Yes 00:11:02.977 16:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # IFS=: 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # read -r var val 00:11:02.977 16:49:51 -- accel/accel.sh@21 -- # val= 00:11:02.977 16:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # IFS=: 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # read -r var val 00:11:02.977 16:49:51 -- accel/accel.sh@21 -- # val= 00:11:02.977 16:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # IFS=: 00:11:02.977 16:49:51 -- accel/accel.sh@20 -- # read -r var val 00:11:04.881 16:49:53 -- accel/accel.sh@21 -- # val= 00:11:04.881 16:49:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.881 16:49:53 -- accel/accel.sh@20 -- # IFS=: 00:11:04.881 16:49:53 -- accel/accel.sh@20 -- # read -r var val 00:11:04.881 16:49:53 -- accel/accel.sh@21 -- # val= 00:11:04.881 16:49:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.881 16:49:53 -- accel/accel.sh@20 -- # IFS=: 00:11:04.881 16:49:53 -- accel/accel.sh@20 -- # read -r var val 00:11:04.881 16:49:53 -- accel/accel.sh@21 -- # val= 00:11:04.881 16:49:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.881 16:49:53 -- accel/accel.sh@20 -- # IFS=: 00:11:04.881 16:49:53 -- accel/accel.sh@20 -- # read -r var val 00:11:04.881 16:49:53 -- accel/accel.sh@21 -- # val= 00:11:04.881 16:49:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.881 16:49:53 -- accel/accel.sh@20 -- # IFS=: 00:11:04.881 16:49:53 -- accel/accel.sh@20 -- # read -r var val 00:11:04.881 16:49:53 -- accel/accel.sh@21 -- # val= 00:11:04.881 16:49:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.881 16:49:53 -- accel/accel.sh@20 -- # IFS=: 00:11:04.881 16:49:53 -- accel/accel.sh@20 -- # read -r var val 00:11:04.881 16:49:53 -- accel/accel.sh@21 -- # val= 00:11:04.881 16:49:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.881 16:49:53 -- accel/accel.sh@20 -- # IFS=: 00:11:04.881 16:49:53 -- accel/accel.sh@20 -- # read -r var val 00:11:04.881 16:49:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:04.881 16:49:53 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:11:04.881 ************************************ 00:11:04.881 END TEST accel_xor 00:11:04.881 ************************************ 00:11:04.881 16:49:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:04.881 00:11:04.881 real 0m4.650s 00:11:04.881 user 0m4.067s 00:11:04.881 sys 0m0.396s 00:11:04.881 16:49:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:04.881 16:49:53 -- common/autotest_common.sh@10 -- # set +x 00:11:04.881 16:49:53 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:11:04.881 16:49:53 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:11:04.881 16:49:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:04.881 16:49:53 -- common/autotest_common.sh@10 -- # set +x 00:11:04.881 ************************************ 00:11:04.881 START TEST accel_dif_verify 00:11:04.881 ************************************ 
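The dif_verify runs below split each 4096-byte vector into 512-byte blocks carrying 8 bytes of DIF metadata apiece (the 'Block size' and 'Metadata size' rows), so every transfer covers eight protected blocks. The invocation, per the '@12' records:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
# no -y flag here; the tables accordingly report 'Verify: No', since the DIF tag check is itself the workload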
00:11:04.881 16:49:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:11:04.881 16:49:53 -- accel/accel.sh@16 -- # local accel_opc 00:11:04.881 16:49:53 -- accel/accel.sh@17 -- # local accel_module 00:11:04.881 16:49:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:11:04.881 16:49:53 -- accel/accel.sh@12 -- # build_accel_config 00:11:04.881 16:49:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:04.881 16:49:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:04.881 16:49:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:04.881 16:49:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:04.881 16:49:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:04.881 16:49:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:04.881 16:49:53 -- accel/accel.sh@41 -- # local IFS=, 00:11:04.881 16:49:53 -- accel/accel.sh@42 -- # jq -r . 00:11:04.881 [2024-11-05 16:49:53.616594] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:04.881 [2024-11-05 16:49:53.617084] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107201 ] 00:11:05.140 [2024-11-05 16:49:53.786725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.140 [2024-11-05 16:49:53.958492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.043 16:49:55 -- accel/accel.sh@18 -- # out=' 00:11:07.043 SPDK Configuration: 00:11:07.043 Core mask: 0x1 00:11:07.043 00:11:07.043 Accel Perf Configuration: 00:11:07.043 Workload Type: dif_verify 00:11:07.043 Vector size: 4096 bytes 00:11:07.043 Transfer size: 4096 bytes 00:11:07.043 Block size: 512 bytes 00:11:07.043 Metadata size: 8 bytes 00:11:07.043 Vector count 1 00:11:07.043 Module: software 00:11:07.043 Queue depth: 32 00:11:07.043 Allocate depth: 32 00:11:07.043 # threads/core: 1 00:11:07.043 Run time: 1 seconds 00:11:07.043 Verify: No 00:11:07.043 00:11:07.043 Running for 1 seconds... 00:11:07.043 00:11:07.043 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:07.043 ------------------------------------------------------------------------------------ 00:11:07.043 0,0 112480/s 439 MiB/s 0 0 00:11:07.043 ==================================================================================== 00:11:07.043 Total 112480/s 439 MiB/s 0 0' 00:11:07.043 16:49:55 -- accel/accel.sh@20 -- # IFS=: 00:11:07.043 16:49:55 -- accel/accel.sh@20 -- # read -r var val 00:11:07.043 16:49:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:11:07.043 16:49:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:07.043 16:49:55 -- accel/accel.sh@12 -- # build_accel_config 00:11:07.043 16:49:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:07.043 16:49:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:07.043 16:49:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:07.043 16:49:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:07.043 16:49:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:07.043 16:49:55 -- accel/accel.sh@41 -- # local IFS=, 00:11:07.043 16:49:55 -- accel/accel.sh@42 -- # jq -r . 00:11:07.043 [2024-11-05 16:49:55.896955] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:07.043 [2024-11-05 16:49:55.897292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107235 ] 00:11:07.300 [2024-11-05 16:49:56.050394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.559 [2024-11-05 16:49:56.231418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.559 16:49:56 -- accel/accel.sh@21 -- # val= 00:11:07.559 16:49:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # IFS=: 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # read -r var val 00:11:07.559 16:49:56 -- accel/accel.sh@21 -- # val= 00:11:07.559 16:49:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # IFS=: 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # read -r var val 00:11:07.559 16:49:56 -- accel/accel.sh@21 -- # val=0x1 00:11:07.559 16:49:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # IFS=: 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # read -r var val 00:11:07.559 16:49:56 -- accel/accel.sh@21 -- # val= 00:11:07.559 16:49:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # IFS=: 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # read -r var val 00:11:07.559 16:49:56 -- accel/accel.sh@21 -- # val= 00:11:07.559 16:49:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # IFS=: 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # read -r var val 00:11:07.559 16:49:56 -- accel/accel.sh@21 -- # val=dif_verify 00:11:07.559 16:49:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.559 16:49:56 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # IFS=: 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # read -r var val 00:11:07.559 16:49:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:07.559 16:49:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # IFS=: 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # read -r var val 00:11:07.559 16:49:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:07.559 16:49:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # IFS=: 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # read -r var val 00:11:07.559 16:49:56 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:07.559 16:49:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # IFS=: 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # read -r var val 00:11:07.559 16:49:56 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:07.559 16:49:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # IFS=: 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # read -r var val 00:11:07.559 16:49:56 -- accel/accel.sh@21 -- # val= 00:11:07.559 16:49:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # IFS=: 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # read -r var val 00:11:07.559 16:49:56 -- accel/accel.sh@21 -- # val=software 00:11:07.559 16:49:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.559 16:49:56 -- accel/accel.sh@23 -- # accel_module=software 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # IFS=: 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # read -r var val 00:11:07.559 16:49:56 -- 
accel/accel.sh@21 -- # val=32 00:11:07.559 16:49:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # IFS=: 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # read -r var val 00:11:07.559 16:49:56 -- accel/accel.sh@21 -- # val=32 00:11:07.559 16:49:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # IFS=: 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # read -r var val 00:11:07.559 16:49:56 -- accel/accel.sh@21 -- # val=1 00:11:07.559 16:49:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # IFS=: 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # read -r var val 00:11:07.559 16:49:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:07.559 16:49:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # IFS=: 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # read -r var val 00:11:07.559 16:49:56 -- accel/accel.sh@21 -- # val=No 00:11:07.559 16:49:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # IFS=: 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # read -r var val 00:11:07.559 16:49:56 -- accel/accel.sh@21 -- # val= 00:11:07.559 16:49:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # IFS=: 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # read -r var val 00:11:07.559 16:49:56 -- accel/accel.sh@21 -- # val= 00:11:07.559 16:49:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # IFS=: 00:11:07.559 16:49:56 -- accel/accel.sh@20 -- # read -r var val 00:11:09.456 16:49:58 -- accel/accel.sh@21 -- # val= 00:11:09.456 16:49:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.456 16:49:58 -- accel/accel.sh@20 -- # IFS=: 00:11:09.456 16:49:58 -- accel/accel.sh@20 -- # read -r var val 00:11:09.456 16:49:58 -- accel/accel.sh@21 -- # val= 00:11:09.456 16:49:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.456 16:49:58 -- accel/accel.sh@20 -- # IFS=: 00:11:09.456 16:49:58 -- accel/accel.sh@20 -- # read -r var val 00:11:09.456 16:49:58 -- accel/accel.sh@21 -- # val= 00:11:09.456 16:49:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.456 16:49:58 -- accel/accel.sh@20 -- # IFS=: 00:11:09.456 16:49:58 -- accel/accel.sh@20 -- # read -r var val 00:11:09.456 16:49:58 -- accel/accel.sh@21 -- # val= 00:11:09.456 16:49:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.456 16:49:58 -- accel/accel.sh@20 -- # IFS=: 00:11:09.456 16:49:58 -- accel/accel.sh@20 -- # read -r var val 00:11:09.456 16:49:58 -- accel/accel.sh@21 -- # val= 00:11:09.456 16:49:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.456 16:49:58 -- accel/accel.sh@20 -- # IFS=: 00:11:09.456 16:49:58 -- accel/accel.sh@20 -- # read -r var val 00:11:09.456 16:49:58 -- accel/accel.sh@21 -- # val= 00:11:09.456 16:49:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.456 16:49:58 -- accel/accel.sh@20 -- # IFS=: 00:11:09.456 16:49:58 -- accel/accel.sh@20 -- # read -r var val 00:11:09.456 16:49:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:09.456 16:49:58 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:11:09.456 16:49:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:09.456 00:11:09.456 real 0m4.607s 00:11:09.456 user 0m4.098s 00:11:09.456 sys 0m0.313s 00:11:09.456 16:49:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:09.456 ************************************ 00:11:09.456 END TEST accel_dif_verify 00:11:09.456 
************************************ 00:11:09.456 16:49:58 -- common/autotest_common.sh@10 -- # set +x 00:11:09.456 16:49:58 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:11:09.456 16:49:58 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:11:09.456 16:49:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:09.456 16:49:58 -- common/autotest_common.sh@10 -- # set +x 00:11:09.456 ************************************ 00:11:09.456 START TEST accel_dif_generate 00:11:09.456 ************************************ 00:11:09.456 16:49:58 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:11:09.456 16:49:58 -- accel/accel.sh@16 -- # local accel_opc 00:11:09.456 16:49:58 -- accel/accel.sh@17 -- # local accel_module 00:11:09.456 16:49:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:11:09.456 16:49:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:09.456 16:49:58 -- accel/accel.sh@12 -- # build_accel_config 00:11:09.456 16:49:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:09.456 16:49:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:09.456 16:49:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:09.456 16:49:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:09.456 16:49:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:09.456 16:49:58 -- accel/accel.sh@41 -- # local IFS=, 00:11:09.456 16:49:58 -- accel/accel.sh@42 -- # jq -r . 00:11:09.456 [2024-11-05 16:49:58.271377] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:09.456 [2024-11-05 16:49:58.271574] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107280 ] 00:11:09.714 [2024-11-05 16:49:58.442027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.972 [2024-11-05 16:49:58.618039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.883 16:50:00 -- accel/accel.sh@18 -- # out=' 00:11:11.883 SPDK Configuration: 00:11:11.883 Core mask: 0x1 00:11:11.883 00:11:11.883 Accel Perf Configuration: 00:11:11.883 Workload Type: dif_generate 00:11:11.883 Vector size: 4096 bytes 00:11:11.883 Transfer size: 4096 bytes 00:11:11.883 Block size: 512 bytes 00:11:11.883 Metadata size: 8 bytes 00:11:11.883 Vector count 1 00:11:11.883 Module: software 00:11:11.883 Queue depth: 32 00:11:11.883 Allocate depth: 32 00:11:11.883 # threads/core: 1 00:11:11.883 Run time: 1 seconds 00:11:11.883 Verify: No 00:11:11.883 00:11:11.883 Running for 1 seconds... 
00:11:11.883 00:11:11.883 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:11.883 ------------------------------------------------------------------------------------ 00:11:11.883 0,0 130752/s 510 MiB/s 0 0 00:11:11.883 ==================================================================================== 00:11:11.883 Total 130752/s 510 MiB/s 0 0' 00:11:11.883 16:50:00 -- accel/accel.sh@20 -- # IFS=: 00:11:11.883 16:50:00 -- accel/accel.sh@20 -- # read -r var val 00:11:11.883 16:50:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:11:11.883 16:50:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:11.883 16:50:00 -- accel/accel.sh@12 -- # build_accel_config 00:11:11.883 16:50:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:11.883 16:50:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:11.883 16:50:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:11.883 16:50:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:11.883 16:50:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:11.883 16:50:00 -- accel/accel.sh@41 -- # local IFS=, 00:11:11.883 16:50:00 -- accel/accel.sh@42 -- # jq -r . 00:11:11.883 [2024-11-05 16:50:00.618011] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:11.883 [2024-11-05 16:50:00.618206] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107318 ] 00:11:12.142 [2024-11-05 16:50:00.786358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.142 [2024-11-05 16:50:00.976474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.400 16:50:01 -- accel/accel.sh@21 -- # val= 00:11:12.400 16:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # IFS=: 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # read -r var val 00:11:12.400 16:50:01 -- accel/accel.sh@21 -- # val= 00:11:12.400 16:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # IFS=: 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # read -r var val 00:11:12.400 16:50:01 -- accel/accel.sh@21 -- # val=0x1 00:11:12.400 16:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # IFS=: 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # read -r var val 00:11:12.400 16:50:01 -- accel/accel.sh@21 -- # val= 00:11:12.400 16:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # IFS=: 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # read -r var val 00:11:12.400 16:50:01 -- accel/accel.sh@21 -- # val= 00:11:12.400 16:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # IFS=: 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # read -r var val 00:11:12.400 16:50:01 -- accel/accel.sh@21 -- # val=dif_generate 00:11:12.400 16:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.400 16:50:01 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # IFS=: 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # read -r var val 00:11:12.400 16:50:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:12.400 16:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # IFS=: 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # read -r var val
00:11:12.400 16:50:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:12.400 16:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # IFS=: 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # read -r var val 00:11:12.400 16:50:01 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:12.400 16:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # IFS=: 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # read -r var val 00:11:12.400 16:50:01 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:12.400 16:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # IFS=: 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # read -r var val 00:11:12.400 16:50:01 -- accel/accel.sh@21 -- # val= 00:11:12.400 16:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # IFS=: 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # read -r var val 00:11:12.400 16:50:01 -- accel/accel.sh@21 -- # val=software 00:11:12.400 16:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.400 16:50:01 -- accel/accel.sh@23 -- # accel_module=software 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # IFS=: 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # read -r var val 00:11:12.400 16:50:01 -- accel/accel.sh@21 -- # val=32 00:11:12.400 16:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # IFS=: 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # read -r var val 00:11:12.400 16:50:01 -- accel/accel.sh@21 -- # val=32 00:11:12.400 16:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # IFS=: 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # read -r var val 00:11:12.400 16:50:01 -- accel/accel.sh@21 -- # val=1 00:11:12.400 16:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # IFS=: 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # read -r var val 00:11:12.400 16:50:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:12.400 16:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # IFS=: 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # read -r var val 00:11:12.400 16:50:01 -- accel/accel.sh@21 -- # val=No 00:11:12.400 16:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # IFS=: 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # read -r var val 00:11:12.400 16:50:01 -- accel/accel.sh@21 -- # val= 00:11:12.400 16:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # IFS=: 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # read -r var val 00:11:12.400 16:50:01 -- accel/accel.sh@21 -- # val= 00:11:12.400 16:50:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # IFS=: 00:11:12.400 16:50:01 -- accel/accel.sh@20 -- # read -r var val 00:11:14.300 16:50:02 -- accel/accel.sh@21 -- # val= 00:11:14.300 16:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.300 16:50:02 -- accel/accel.sh@20 -- # IFS=: 00:11:14.300 16:50:02 -- accel/accel.sh@20 -- # read -r var val 00:11:14.300 16:50:02 -- accel/accel.sh@21 -- # val= 00:11:14.300 16:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.300 16:50:02 -- accel/accel.sh@20 -- # IFS=: 00:11:14.300 16:50:02 -- accel/accel.sh@20 -- # read -r var val 00:11:14.300 16:50:02 -- accel/accel.sh@21 -- # val= 00:11:14.300 16:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.300 16:50:02 -- 
accel/accel.sh@20 -- # IFS=: 00:11:14.300 16:50:02 -- accel/accel.sh@20 -- # read -r var val 00:11:14.300 16:50:02 -- accel/accel.sh@21 -- # val= 00:11:14.300 16:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.300 16:50:02 -- accel/accel.sh@20 -- # IFS=: 00:11:14.300 16:50:02 -- accel/accel.sh@20 -- # read -r var val 00:11:14.300 16:50:02 -- accel/accel.sh@21 -- # val= 00:11:14.300 16:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.300 16:50:02 -- accel/accel.sh@20 -- # IFS=: 00:11:14.300 16:50:02 -- accel/accel.sh@20 -- # read -r var val 00:11:14.300 16:50:02 -- accel/accel.sh@21 -- # val= 00:11:14.300 16:50:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.300 16:50:02 -- accel/accel.sh@20 -- # IFS=: 00:11:14.300 16:50:02 -- accel/accel.sh@20 -- # read -r var val 00:11:14.300 16:50:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:14.301 16:50:02 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:11:14.301 16:50:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:14.301 00:11:14.301 real 0m4.721s 00:11:14.301 user 0m4.149s 00:11:14.301 sys 0m0.391s 00:11:14.301 16:50:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:14.301 ************************************ 00:11:14.301 END TEST accel_dif_generate 00:11:14.301 ************************************ 00:11:14.301 16:50:02 -- common/autotest_common.sh@10 -- # set +x 00:11:14.301 16:50:02 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:11:14.301 16:50:02 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:11:14.301 16:50:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:14.301 16:50:02 -- common/autotest_common.sh@10 -- # set +x 00:11:14.301 ************************************ 00:11:14.301 START TEST accel_dif_generate_copy 00:11:14.301 ************************************ 00:11:14.301 16:50:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:11:14.301 16:50:02 -- accel/accel.sh@16 -- # local accel_opc 00:11:14.301 16:50:02 -- accel/accel.sh@17 -- # local accel_module 00:11:14.301 16:50:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:11:14.301 16:50:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:14.301 16:50:02 -- accel/accel.sh@12 -- # build_accel_config 00:11:14.301 16:50:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:14.301 16:50:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:14.301 16:50:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:14.301 16:50:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:14.301 16:50:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:14.301 16:50:02 -- accel/accel.sh@41 -- # local IFS=, 00:11:14.301 16:50:02 -- accel/accel.sh@42 -- # jq -r . 00:11:14.301 [2024-11-05 16:50:03.041639] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:14.301 [2024-11-05 16:50:03.041853] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107370 ] 00:11:14.558 [2024-11-05 16:50:03.210208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.558 [2024-11-05 16:50:03.392143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.086 16:50:05 -- accel/accel.sh@18 -- # out=' 00:11:17.086 SPDK Configuration: 00:11:17.086 Core mask: 0x1 00:11:17.086 00:11:17.087 Accel Perf Configuration: 00:11:17.087 Workload Type: dif_generate_copy 00:11:17.087 Vector size: 4096 bytes 00:11:17.087 Transfer size: 4096 bytes 00:11:17.087 Vector count 1 00:11:17.087 Module: software 00:11:17.087 Queue depth: 32 00:11:17.087 Allocate depth: 32 00:11:17.087 # threads/core: 1 00:11:17.087 Run time: 1 seconds 00:11:17.087 Verify: No 00:11:17.087 00:11:17.087 Running for 1 seconds... 00:11:17.087 00:11:17.087 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:17.087 ------------------------------------------------------------------------------------ 00:11:17.087 0,0 98048/s 383 MiB/s 0 0 00:11:17.087 ==================================================================================== 00:11:17.087 Total 98048/s 383 MiB/s 0 0' 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # IFS=: 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # read -r var val 00:11:17.087 16:50:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:11:17.087 16:50:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:17.087 16:50:05 -- accel/accel.sh@12 -- # build_accel_config 00:11:17.087 16:50:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:17.087 16:50:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:17.087 16:50:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:17.087 16:50:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:17.087 16:50:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:17.087 16:50:05 -- accel/accel.sh@41 -- # local IFS=, 00:11:17.087 16:50:05 -- accel/accel.sh@42 -- # jq -r . 00:11:17.087 [2024-11-05 16:50:05.397125] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:17.087 [2024-11-05 16:50:05.397346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107405 ] 00:11:17.087 [2024-11-05 16:50:05.567221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.087 [2024-11-05 16:50:05.763882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.087 16:50:05 -- accel/accel.sh@21 -- # val= 00:11:17.087 16:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # IFS=: 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # read -r var val 00:11:17.087 16:50:05 -- accel/accel.sh@21 -- # val= 00:11:17.087 16:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # IFS=: 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # read -r var val 00:11:17.087 16:50:05 -- accel/accel.sh@21 -- # val=0x1 00:11:17.087 16:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # IFS=: 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # read -r var val 00:11:17.087 16:50:05 -- accel/accel.sh@21 -- # val= 00:11:17.087 16:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # IFS=: 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # read -r var val 00:11:17.087 16:50:05 -- accel/accel.sh@21 -- # val= 00:11:17.087 16:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # IFS=: 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # read -r var val 00:11:17.087 16:50:05 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:11:17.087 16:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.087 16:50:05 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # IFS=: 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # read -r var val 00:11:17.087 16:50:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:17.087 16:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # IFS=: 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # read -r var val 00:11:17.087 16:50:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:17.087 16:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # IFS=: 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # read -r var val 00:11:17.087 16:50:05 -- accel/accel.sh@21 -- # val= 00:11:17.087 16:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # IFS=: 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # read -r var val 00:11:17.087 16:50:05 -- accel/accel.sh@21 -- # val=software 00:11:17.087 16:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.087 16:50:05 -- accel/accel.sh@23 -- # accel_module=software 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # IFS=: 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # read -r var val 00:11:17.087 16:50:05 -- accel/accel.sh@21 -- # val=32 00:11:17.087 16:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # IFS=: 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # read -r var val 00:11:17.087 16:50:05 -- accel/accel.sh@21 -- # val=32 00:11:17.087 16:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # IFS=: 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # read -r var val 00:11:17.087 16:50:05 -- accel/accel.sh@21 
-- # val=1 00:11:17.087 16:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # IFS=: 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # read -r var val 00:11:17.087 16:50:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:17.087 16:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # IFS=: 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # read -r var val 00:11:17.087 16:50:05 -- accel/accel.sh@21 -- # val=No 00:11:17.087 16:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # IFS=: 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # read -r var val 00:11:17.087 16:50:05 -- accel/accel.sh@21 -- # val= 00:11:17.087 16:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # IFS=: 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # read -r var val 00:11:17.087 16:50:05 -- accel/accel.sh@21 -- # val= 00:11:17.087 16:50:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # IFS=: 00:11:17.087 16:50:05 -- accel/accel.sh@20 -- # read -r var val 00:11:18.990 16:50:07 -- accel/accel.sh@21 -- # val= 00:11:18.990 16:50:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.990 16:50:07 -- accel/accel.sh@20 -- # IFS=: 00:11:18.990 16:50:07 -- accel/accel.sh@20 -- # read -r var val 00:11:18.990 16:50:07 -- accel/accel.sh@21 -- # val= 00:11:18.991 16:50:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.991 16:50:07 -- accel/accel.sh@20 -- # IFS=: 00:11:18.991 16:50:07 -- accel/accel.sh@20 -- # read -r var val 00:11:18.991 16:50:07 -- accel/accel.sh@21 -- # val= 00:11:18.991 16:50:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.991 16:50:07 -- accel/accel.sh@20 -- # IFS=: 00:11:18.991 16:50:07 -- accel/accel.sh@20 -- # read -r var val 00:11:18.991 16:50:07 -- accel/accel.sh@21 -- # val= 00:11:18.991 16:50:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.991 16:50:07 -- accel/accel.sh@20 -- # IFS=: 00:11:18.991 16:50:07 -- accel/accel.sh@20 -- # read -r var val 00:11:18.991 16:50:07 -- accel/accel.sh@21 -- # val= 00:11:18.991 16:50:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.991 16:50:07 -- accel/accel.sh@20 -- # IFS=: 00:11:18.991 16:50:07 -- accel/accel.sh@20 -- # read -r var val 00:11:18.991 16:50:07 -- accel/accel.sh@21 -- # val= 00:11:18.991 16:50:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.991 16:50:07 -- accel/accel.sh@20 -- # IFS=: 00:11:18.991 16:50:07 -- accel/accel.sh@20 -- # read -r var val 00:11:18.991 16:50:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:18.991 16:50:07 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:11:18.991 16:50:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:18.991 00:11:18.991 real 0m4.717s 00:11:18.991 user 0m4.137s 00:11:18.991 sys 0m0.411s 00:11:18.991 16:50:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:18.991 ************************************ 00:11:18.991 END TEST accel_dif_generate_copy 00:11:18.991 ************************************ 00:11:18.991 16:50:07 -- common/autotest_common.sh@10 -- # set +x 00:11:18.991 16:50:07 -- accel/accel.sh@107 -- # [[ y == y ]] 00:11:18.991 16:50:07 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:18.991 16:50:07 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:11:18.991 16:50:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.991 16:50:07 -- 
common/autotest_common.sh@10 -- # set +x
00:11:18.991 ************************************
00:11:18.991 START TEST accel_comp
00:11:18.991 ************************************
00:11:18.991 16:50:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:11:18.991 16:50:07 -- accel/accel.sh@16 -- # local accel_opc 00:11:18.991 16:50:07 -- accel/accel.sh@17 -- # local accel_module 00:11:18.991 16:50:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:18.991 16:50:07 -- accel/accel.sh@12 -- # build_accel_config 00:11:18.991 16:50:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:18.991 16:50:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:18.991 16:50:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:18.991 16:50:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:18.991 16:50:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:18.991 16:50:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:18.991 16:50:07 -- accel/accel.sh@41 -- # local IFS=, 00:11:18.991 16:50:07 -- accel/accel.sh@42 -- # jq -r .
00:11:18.991 [2024-11-05 16:50:07.816100] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:18.991 [2024-11-05 16:50:07.816312] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107456 ]
00:11:19.250 [2024-11-05 16:50:07.990051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:19.508 [2024-11-05 16:50:08.168813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:21.410 16:50:10 -- accel/accel.sh@18 -- # out='Preparing input file...
00:11:21.410 
00:11:21.410 SPDK Configuration:
00:11:21.410 Core mask: 0x1
00:11:21.410 
00:11:21.410 Accel Perf Configuration:
00:11:21.410 Workload Type: compress
00:11:21.410 Transfer size: 4096 bytes
00:11:21.410 Vector count 1
00:11:21.410 Module: software
00:11:21.410 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:11:21.410 Queue depth: 32
00:11:21.410 Allocate depth: 32
00:11:21.410 # threads/core: 1
00:11:21.410 Run time: 1 seconds
00:11:21.410 Verify: No
00:11:21.410 
00:11:21.410 Running for 1 seconds...
00:11:21.410 
00:11:21.410 Core,Thread Transfers Bandwidth Failed Miscompares
00:11:21.410 ------------------------------------------------------------------------------------
00:11:21.410 0,0 52896/s 220 MiB/s 0 0
00:11:21.410 ====================================================================================
00:11:21.410 Total 52896/s 206 MiB/s 0 0'
00:11:21.410 16:50:10 -- accel/accel.sh@20 -- # IFS=: 00:11:21.410 16:50:10 -- accel/accel.sh@20 -- # read -r var val 00:11:21.410 16:50:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:21.410 16:50:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:21.410 16:50:10 -- accel/accel.sh@12 -- # build_accel_config 00:11:21.410 16:50:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:21.410 16:50:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:21.410 16:50:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:21.410 16:50:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:21.410 16:50:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:21.410 16:50:10 -- accel/accel.sh@41 -- # local IFS=, 00:11:21.410 16:50:10 -- accel/accel.sh@42 -- # jq -r .
00:11:21.410 [2024-11-05 16:50:10.222932] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:21.410 [2024-11-05 16:50:10.223387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107500 ]
00:11:21.668 [2024-11-05 16:50:10.391528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:21.927 [2024-11-05 16:50:10.590818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:21.927 16:50:10 -- accel/accel.sh@21 -- # val= 00:11:21.927 16:50:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # IFS=: 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # read -r var val 00:11:21.927 16:50:10 -- accel/accel.sh@21 -- # val= 00:11:21.927 16:50:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # IFS=: 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # read -r var val 00:11:21.927 16:50:10 -- accel/accel.sh@21 -- # val=0x1 00:11:21.927 16:50:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # IFS=: 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # read -r var val 00:11:21.927 16:50:10 -- accel/accel.sh@21 -- # val= 00:11:21.927 16:50:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # IFS=: 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # read -r var val 00:11:21.927 16:50:10 -- accel/accel.sh@21 -- # val= 00:11:21.927 16:50:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # IFS=: 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # read -r var val 00:11:21.927 16:50:10 -- accel/accel.sh@21 -- # val=compress 00:11:21.927 16:50:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.927 16:50:10 -- accel/accel.sh@24 -- # accel_opc=compress 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # IFS=:
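For reference: the block above is one single-core software "compress" pass over test/accel/bib, and its Total row is internally consistent, 52896 transfers x 4096 B = 216,662,016 B/s, i.e. the 206 MiB/s reported. A minimal sketch of an equivalent standalone invocation, reusing only the flags visible in the trace (the harness also feeds a generated accel JSON config to -c on /dev/fd/62; treating that as optional for a plain software run is an assumption here):

    # sketch: 1-second software compress benchmark, flags as captured in the trace above
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib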
00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # read -r var val 00:11:21.927 16:50:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:21.927 16:50:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # IFS=: 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # read -r var val 00:11:21.927 16:50:10 -- accel/accel.sh@21 -- # val= 00:11:21.927 16:50:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # IFS=: 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # read -r var val 00:11:21.927 16:50:10 -- accel/accel.sh@21 -- # val=software 00:11:21.927 16:50:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.927 16:50:10 -- accel/accel.sh@23 -- # accel_module=software 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # IFS=: 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # read -r var val 00:11:21.927 16:50:10 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:21.927 16:50:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # IFS=: 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # read -r var val 00:11:21.927 16:50:10 -- accel/accel.sh@21 -- # val=32 00:11:21.927 16:50:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # IFS=: 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # read -r var val 00:11:21.927 16:50:10 -- accel/accel.sh@21 -- # val=32 00:11:21.927 16:50:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # IFS=: 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # read -r var val 00:11:21.927 16:50:10 -- accel/accel.sh@21 -- # val=1 00:11:21.927 16:50:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # IFS=: 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # read -r var val 00:11:21.927 16:50:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:21.927 16:50:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # IFS=: 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # read -r var val 00:11:21.927 16:50:10 -- accel/accel.sh@21 -- # val=No 00:11:21.927 16:50:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # IFS=: 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # read -r var val 00:11:21.927 16:50:10 -- accel/accel.sh@21 -- # val= 00:11:21.927 16:50:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # IFS=: 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # read -r var val 00:11:21.927 16:50:10 -- accel/accel.sh@21 -- # val= 00:11:21.927 16:50:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # IFS=: 00:11:21.927 16:50:10 -- accel/accel.sh@20 -- # read -r var val 00:11:23.826 16:50:12 -- accel/accel.sh@21 -- # val= 00:11:23.826 16:50:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.826 16:50:12 -- accel/accel.sh@20 -- # IFS=: 00:11:23.826 16:50:12 -- accel/accel.sh@20 -- # read -r var val 00:11:23.826 16:50:12 -- accel/accel.sh@21 -- # val= 00:11:23.826 16:50:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.826 16:50:12 -- accel/accel.sh@20 -- # IFS=: 00:11:23.826 16:50:12 -- accel/accel.sh@20 -- # read -r var val 00:11:23.826 16:50:12 -- accel/accel.sh@21 -- # val= 00:11:23.827 16:50:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.827 16:50:12 -- accel/accel.sh@20 -- # IFS=: 00:11:23.827 16:50:12 -- accel/accel.sh@20 -- # read -r var val 00:11:23.827 16:50:12 -- accel/accel.sh@21 -- # val= 
00:11:23.827 16:50:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.827 16:50:12 -- accel/accel.sh@20 -- # IFS=: 00:11:23.827 16:50:12 -- accel/accel.sh@20 -- # read -r var val 00:11:23.827 16:50:12 -- accel/accel.sh@21 -- # val= 00:11:23.827 16:50:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.827 16:50:12 -- accel/accel.sh@20 -- # IFS=: 00:11:23.827 16:50:12 -- accel/accel.sh@20 -- # read -r var val 00:11:23.827 16:50:12 -- accel/accel.sh@21 -- # val= 00:11:23.827 16:50:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.827 16:50:12 -- accel/accel.sh@20 -- # IFS=: 00:11:23.827 16:50:12 -- accel/accel.sh@20 -- # read -r var val
00:11:23.827 ************************************
00:11:23.827 END TEST accel_comp
00:11:23.827 ************************************
00:11:23.827 16:50:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:23.827 16:50:12 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:11:23.827 16:50:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:11:23.827 
00:11:23.827 real 0m4.810s
00:11:23.827 user 0m4.218s
00:11:23.827 sys 0m0.404s
00:11:23.827 16:50:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:23.827 16:50:12 -- common/autotest_common.sh@10 -- # set +x
00:11:23.827 16:50:12 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:11:23.827 16:50:12 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:11:23.827 16:50:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:23.827 16:50:12 -- common/autotest_common.sh@10 -- # set +x
00:11:23.827 ************************************
00:11:23.827 START TEST accel_decomp
00:11:23.827 ************************************
00:11:23.827 16:50:12 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:23.827 16:50:12 -- accel/accel.sh@16 -- # local accel_opc 00:11:23.827 16:50:12 -- accel/accel.sh@17 -- # local accel_module 00:11:23.827 16:50:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:23.827 16:50:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:23.827 16:50:12 -- accel/accel.sh@12 -- # build_accel_config 00:11:23.827 16:50:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:23.827 16:50:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:23.827 16:50:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:23.827 16:50:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:23.827 16:50:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:23.827 16:50:12 -- accel/accel.sh@41 -- # local IFS=, 00:11:23.827 16:50:12 -- accel/accel.sh@42 -- # jq -r .
00:11:23.827 [2024-11-05 16:50:12.663878] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:23.827 [2024-11-05 16:50:12.664186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107545 ]
00:11:24.085 [2024-11-05 16:50:12.818626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:24.343 [2024-11-05 16:50:13.004033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:26.243 16:50:14 -- accel/accel.sh@18 -- # out='Preparing input file...
00:11:26.243 
00:11:26.243 SPDK Configuration:
00:11:26.243 Core mask: 0x1
00:11:26.243 
00:11:26.243 Accel Perf Configuration:
00:11:26.243 Workload Type: decompress
00:11:26.243 Transfer size: 4096 bytes
00:11:26.243 Vector count 1
00:11:26.243 Module: software
00:11:26.243 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:11:26.243 Queue depth: 32
00:11:26.243 Allocate depth: 32
00:11:26.243 # threads/core: 1
00:11:26.243 Run time: 1 seconds
00:11:26.243 Verify: Yes
00:11:26.243 
00:11:26.243 Running for 1 seconds...
00:11:26.243 
00:11:26.243 Core,Thread Transfers Bandwidth Failed Miscompares
00:11:26.243 ------------------------------------------------------------------------------------
00:11:26.243 0,0 65184/s 120 MiB/s 0 0
00:11:26.243 ====================================================================================
00:11:26.243 Total 65184/s 254 MiB/s 0 0'
00:11:26.243 16:50:14 -- accel/accel.sh@20 -- # IFS=: 00:11:26.243 16:50:14 -- accel/accel.sh@20 -- # read -r var val 00:11:26.243 16:50:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:26.243 16:50:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:26.243 16:50:14 -- accel/accel.sh@12 -- # build_accel_config 00:11:26.243 16:50:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:26.243 16:50:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:26.243 16:50:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:26.243 16:50:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:26.243 16:50:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:26.243 16:50:14 -- accel/accel.sh@41 -- # local IFS=, 00:11:26.243 16:50:14 -- accel/accel.sh@42 -- # jq -r .
00:11:26.243 [2024-11-05 16:50:15.027493] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
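For reference: this pass repeats the workload as "decompress" with verification enabled; the -y flag in the accel_test invocation is what produces "Verify: Yes" in the configuration block, and a verified run is why the Miscompares column is meaningful. The Total row is again self-consistent: 65184 transfers x 4096 B per transfer is about 254 MiB/s. A sketch of the equivalent standalone run (same assumption as above about the fd-62 JSON config being optional):

    # sketch: verified software decompress of the same input file
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y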
00:11:26.243 [2024-11-05 16:50:15.027899] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107587 ] 00:11:26.501 [2024-11-05 16:50:15.198666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.760 [2024-11-05 16:50:15.392919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.760 16:50:15 -- accel/accel.sh@21 -- # val= 00:11:26.760 16:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.760 16:50:15 -- accel/accel.sh@20 -- # IFS=: 00:11:26.760 16:50:15 -- accel/accel.sh@20 -- # read -r var val 00:11:26.760 16:50:15 -- accel/accel.sh@21 -- # val= 00:11:26.760 16:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.760 16:50:15 -- accel/accel.sh@20 -- # IFS=: 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # read -r var val 00:11:26.761 16:50:15 -- accel/accel.sh@21 -- # val= 00:11:26.761 16:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # IFS=: 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # read -r var val 00:11:26.761 16:50:15 -- accel/accel.sh@21 -- # val=0x1 00:11:26.761 16:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # IFS=: 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # read -r var val 00:11:26.761 16:50:15 -- accel/accel.sh@21 -- # val= 00:11:26.761 16:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # IFS=: 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # read -r var val 00:11:26.761 16:50:15 -- accel/accel.sh@21 -- # val= 00:11:26.761 16:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # IFS=: 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # read -r var val 00:11:26.761 16:50:15 -- accel/accel.sh@21 -- # val=decompress 00:11:26.761 16:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.761 16:50:15 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # IFS=: 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # read -r var val 00:11:26.761 16:50:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:26.761 16:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # IFS=: 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # read -r var val 00:11:26.761 16:50:15 -- accel/accel.sh@21 -- # val= 00:11:26.761 16:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # IFS=: 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # read -r var val 00:11:26.761 16:50:15 -- accel/accel.sh@21 -- # val=software 00:11:26.761 16:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.761 16:50:15 -- accel/accel.sh@23 -- # accel_module=software 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # IFS=: 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # read -r var val 00:11:26.761 16:50:15 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:26.761 16:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # IFS=: 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # read -r var val 00:11:26.761 16:50:15 -- accel/accel.sh@21 -- # val=32 00:11:26.761 16:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # IFS=: 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # read -r var val 00:11:26.761 16:50:15 -- 
accel/accel.sh@21 -- # val=32 00:11:26.761 16:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # IFS=: 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # read -r var val 00:11:26.761 16:50:15 -- accel/accel.sh@21 -- # val=1 00:11:26.761 16:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # IFS=: 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # read -r var val 00:11:26.761 16:50:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:26.761 16:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # IFS=: 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # read -r var val 00:11:26.761 16:50:15 -- accel/accel.sh@21 -- # val=Yes 00:11:26.761 16:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # IFS=: 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # read -r var val 00:11:26.761 16:50:15 -- accel/accel.sh@21 -- # val= 00:11:26.761 16:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # IFS=: 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # read -r var val 00:11:26.761 16:50:15 -- accel/accel.sh@21 -- # val= 00:11:26.761 16:50:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # IFS=: 00:11:26.761 16:50:15 -- accel/accel.sh@20 -- # read -r var val
00:11:28.661 16:50:17 -- accel/accel.sh@21 -- # val= 00:11:28.661 16:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.661 16:50:17 -- accel/accel.sh@20 -- # IFS=: 00:11:28.661 16:50:17 -- accel/accel.sh@20 -- # read -r var val 00:11:28.661 16:50:17 -- accel/accel.sh@21 -- # val= 00:11:28.661 16:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.661 16:50:17 -- accel/accel.sh@20 -- # IFS=: 00:11:28.661 16:50:17 -- accel/accel.sh@20 -- # read -r var val 00:11:28.661 16:50:17 -- accel/accel.sh@21 -- # val= 00:11:28.661 16:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.661 16:50:17 -- accel/accel.sh@20 -- # IFS=: 00:11:28.661 16:50:17 -- accel/accel.sh@20 -- # read -r var val 00:11:28.661 16:50:17 -- accel/accel.sh@21 -- # val= 00:11:28.661 16:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.661 16:50:17 -- accel/accel.sh@20 -- # IFS=: 00:11:28.661 16:50:17 -- accel/accel.sh@20 -- # read -r var val 00:11:28.661 16:50:17 -- accel/accel.sh@21 -- # val= 00:11:28.661 16:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.661 16:50:17 -- accel/accel.sh@20 -- # IFS=: 00:11:28.661 16:50:17 -- accel/accel.sh@20 -- # read -r var val 00:11:28.661 16:50:17 -- accel/accel.sh@21 -- # val= 00:11:28.661 16:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.661 16:50:17 -- accel/accel.sh@20 -- # IFS=: 00:11:28.661 16:50:17 -- accel/accel.sh@20 -- # read -r var val
00:11:28.661 16:50:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:28.661 16:50:17 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:28.661 16:50:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:11:28.661 
00:11:28.661 real 0m4.771s
00:11:28.661 user 0m4.207s
00:11:28.661 sys 0m0.384s
00:11:28.661 16:50:17 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:28.662 ************************************
00:11:28.662 END TEST accel_decomp
00:11:28.662 ************************************
00:11:28.662 16:50:17 -- common/autotest_common.sh@10 -- # set +x
00:11:28.662 16:50:17 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:11:28.662 16:50:17 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:11:28.662 16:50:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:28.662 16:50:17 -- common/autotest_common.sh@10 -- # set +x
00:11:28.662 ************************************
00:11:28.662 START TEST accel_decmop_full
00:11:28.662 ************************************
00:11:28.662 16:50:17 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:28.662 16:50:17 -- accel/accel.sh@16 -- # local accel_opc 00:11:28.662 16:50:17 -- accel/accel.sh@17 -- # local accel_module 00:11:28.662 16:50:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:28.662 16:50:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:28.662 16:50:17 -- accel/accel.sh@12 -- # build_accel_config 00:11:28.662 16:50:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:28.662 16:50:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:28.662 16:50:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:28.662 16:50:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:28.662 16:50:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:28.662 16:50:17 -- accel/accel.sh@41 -- # local IFS=, 00:11:28.662 16:50:17 -- accel/accel.sh@42 -- # jq -r .
00:11:28.920 [2024-11-05 16:50:17.492824] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:28.920 [2024-11-05 16:50:17.493061] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107632 ]
00:11:28.920 [2024-11-05 16:50:17.658832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:29.178 [2024-11-05 16:50:17.870332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:31.096 16:50:19 -- accel/accel.sh@18 -- # out='Preparing input file...
00:11:31.097 
00:11:31.097 SPDK Configuration:
00:11:31.097 Core mask: 0x1
00:11:31.097 
00:11:31.097 Accel Perf Configuration:
00:11:31.097 Workload Type: decompress
00:11:31.097 Transfer size: 111250 bytes
00:11:31.097 Vector count 1
00:11:31.097 Module: software
00:11:31.097 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:11:31.097 Queue depth: 32
00:11:31.097 Allocate depth: 32
00:11:31.097 # threads/core: 1
00:11:31.097 Run time: 1 seconds
00:11:31.097 Verify: Yes
00:11:31.097 
00:11:31.097 Running for 1 seconds...
00:11:31.097 
00:11:31.097 Core,Thread Transfers Bandwidth Failed Miscompares
00:11:31.097 ------------------------------------------------------------------------------------
00:11:31.097 0,0 4992/s 206 MiB/s 0 0
00:11:31.097 ====================================================================================
00:11:31.097 Total 4992/s 529 MiB/s 0 0'
00:11:31.097 16:50:19 -- accel/accel.sh@20 -- # IFS=: 00:11:31.097 16:50:19 -- accel/accel.sh@20 -- # read -r var val 00:11:31.097 16:50:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:31.097 16:50:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:31.097 16:50:19 -- accel/accel.sh@12 -- # build_accel_config 00:11:31.097 16:50:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:31.097 16:50:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:31.097 16:50:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:31.097 16:50:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:31.097 16:50:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:31.097 16:50:19 -- accel/accel.sh@41 -- # local IFS=, 00:11:31.097 16:50:19 -- accel/accel.sh@42 -- # jq -r .
00:11:31.097 [2024-11-05 16:50:19.874201] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:31.097 [2024-11-05 16:50:19.874391] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107669 ]
00:11:31.355 [2024-11-05 16:50:20.041093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:31.355 [2024-11-05 16:50:20.232770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:31.614 16:50:20 -- accel/accel.sh@21 -- # val= 00:11:31.614 16:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.614 16:50:20 -- accel/accel.sh@20 -- # IFS=: 00:11:31.614 16:50:20 -- accel/accel.sh@20 -- # read -r var val 00:11:31.614 16:50:20 -- accel/accel.sh@21 -- # val= 00:11:31.614 16:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.614 16:50:20 -- accel/accel.sh@20 -- # IFS=: 00:11:31.614 16:50:20 -- accel/accel.sh@20 -- # read -r var val 00:11:31.614 16:50:20 -- accel/accel.sh@21 -- # val=0x1 00:11:31.614 16:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.614 16:50:20 -- accel/accel.sh@20 -- # IFS=: 00:11:31.614 16:50:20 -- accel/accel.sh@20 -- # read -r var val 00:11:31.614 16:50:20 -- accel/accel.sh@21 -- # val= 00:11:31.614 16:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.614 16:50:20 -- accel/accel.sh@20 -- # IFS=: 00:11:31.614 16:50:20 -- accel/accel.sh@20 -- # read -r var val 00:11:31.614 16:50:20 -- accel/accel.sh@21 -- # val= 00:11:31.614 16:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.614 16:50:20 -- accel/accel.sh@20 -- # IFS=: 00:11:31.614 16:50:20 -- accel/accel.sh@20 -- # read -r var val 00:11:31.614 16:50:20 -- accel/accel.sh@21 -- # val=decompress 00:11:31.614 16:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.614 16:50:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:31.614 16:50:20 --
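For reference: "-o 0" in the accel_test arguments switches this decompress pass to whole-file output, which is why the configuration reports "Transfer size: 111250 bytes" instead of 4096, and the Total row again follows from transfers x size (4992 x 111250 B is about 529 MiB/s). The "START TEST / END TEST" banners and the real/user/sys trailers around each pass come from the harness's run_test wrapper; a rough sketch of that pattern (simplified; the real helper in autotest_common.sh also does the argument and xtrace bookkeeping visible in the surrounding trace):

    # sketch: roughly what the run_test wrapper seen in this log does
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"            # emits the real/user/sys lines captured above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test accel_decmop_full accel_test -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0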
accel/accel.sh@20 -- # IFS=: 00:11:31.614 16:50:20 -- accel/accel.sh@20 -- # read -r var val 00:11:31.615 16:50:20 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:31.615 16:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # IFS=: 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # read -r var val 00:11:31.615 16:50:20 -- accel/accel.sh@21 -- # val= 00:11:31.615 16:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # IFS=: 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # read -r var val 00:11:31.615 16:50:20 -- accel/accel.sh@21 -- # val=software 00:11:31.615 16:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.615 16:50:20 -- accel/accel.sh@23 -- # accel_module=software 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # IFS=: 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # read -r var val 00:11:31.615 16:50:20 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:31.615 16:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # IFS=: 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # read -r var val 00:11:31.615 16:50:20 -- accel/accel.sh@21 -- # val=32 00:11:31.615 16:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # IFS=: 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # read -r var val 00:11:31.615 16:50:20 -- accel/accel.sh@21 -- # val=32 00:11:31.615 16:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # IFS=: 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # read -r var val 00:11:31.615 16:50:20 -- accel/accel.sh@21 -- # val=1 00:11:31.615 16:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # IFS=: 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # read -r var val 00:11:31.615 16:50:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:31.615 16:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # IFS=: 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # read -r var val 00:11:31.615 16:50:20 -- accel/accel.sh@21 -- # val=Yes 00:11:31.615 16:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # IFS=: 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # read -r var val 00:11:31.615 16:50:20 -- accel/accel.sh@21 -- # val= 00:11:31.615 16:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # IFS=: 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # read -r var val 00:11:31.615 16:50:20 -- accel/accel.sh@21 -- # val= 00:11:31.615 16:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # IFS=: 00:11:31.615 16:50:20 -- accel/accel.sh@20 -- # read -r var val 00:11:33.517 16:50:22 -- accel/accel.sh@21 -- # val= 00:11:33.517 16:50:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.517 16:50:22 -- accel/accel.sh@20 -- # IFS=: 00:11:33.517 16:50:22 -- accel/accel.sh@20 -- # read -r var val 00:11:33.517 16:50:22 -- accel/accel.sh@21 -- # val= 00:11:33.517 16:50:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.517 16:50:22 -- accel/accel.sh@20 -- # IFS=: 00:11:33.517 16:50:22 -- accel/accel.sh@20 -- # read -r var val 00:11:33.517 16:50:22 -- accel/accel.sh@21 -- # val= 00:11:33.517 16:50:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.517 16:50:22 -- accel/accel.sh@20 -- # IFS=: 00:11:33.517 16:50:22 -- accel/accel.sh@20 -- # read -r var val 00:11:33.517 16:50:22 -- 
accel/accel.sh@21 -- # val= 00:11:33.517 16:50:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.517 16:50:22 -- accel/accel.sh@20 -- # IFS=: 00:11:33.517 16:50:22 -- accel/accel.sh@20 -- # read -r var val 00:11:33.517 16:50:22 -- accel/accel.sh@21 -- # val= 00:11:33.517 16:50:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.517 16:50:22 -- accel/accel.sh@20 -- # IFS=: 00:11:33.517 16:50:22 -- accel/accel.sh@20 -- # read -r var val 00:11:33.517 16:50:22 -- accel/accel.sh@21 -- # val= 00:11:33.517 16:50:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.517 16:50:22 -- accel/accel.sh@20 -- # IFS=: 00:11:33.517 16:50:22 -- accel/accel.sh@20 -- # read -r var val
00:11:33.517 16:50:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:33.517 16:50:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:33.517 16:50:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:11:33.517 
00:11:33.517 real 0m4.769s
00:11:33.517 user 0m4.202s
00:11:33.517 sys 0m0.392s
00:11:33.517 16:50:22 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:33.518 ************************************
00:11:33.518 END TEST accel_decmop_full
00:11:33.518 ************************************
00:11:33.518 16:50:22 -- common/autotest_common.sh@10 -- # set +x
00:11:33.518 16:50:22 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:11:33.518 16:50:22 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:11:33.518 16:50:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:33.518 16:50:22 -- common/autotest_common.sh@10 -- # set +x
00:11:33.518 ************************************
00:11:33.518 START TEST accel_decomp_mcore
00:11:33.518 ************************************
00:11:33.518 16:50:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:33.518 16:50:22 -- accel/accel.sh@16 -- # local accel_opc 00:11:33.518 16:50:22 -- accel/accel.sh@17 -- # local accel_module 00:11:33.518 16:50:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:33.518 16:50:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:33.518 16:50:22 -- accel/accel.sh@12 -- # build_accel_config 00:11:33.518 16:50:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:33.518 16:50:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:33.518 16:50:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:33.518 16:50:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:33.518 16:50:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:33.518 16:50:22 -- accel/accel.sh@41 -- # local IFS=, 00:11:33.518 16:50:22 -- accel/accel.sh@42 -- # jq -r .
00:11:33.518 [2024-11-05 16:50:22.307537] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:33.518 [2024-11-05 16:50:22.307782] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107727 ]
00:11:33.776 [2024-11-05 16:50:22.491815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:33.776 [2024-11-05 16:50:22.656057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:33.776 [2024-11-05 16:50:22.656202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:11:33.776 [2024-11-05 16:50:22.656332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:11:33.776 [2024-11-05 16:50:22.656596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:36.308 16:50:24 -- accel/accel.sh@18 -- # out='Preparing input file...
00:11:36.308 
00:11:36.308 SPDK Configuration:
00:11:36.308 Core mask: 0xf
00:11:36.308 
00:11:36.308 Accel Perf Configuration:
00:11:36.308 Workload Type: decompress
00:11:36.308 Transfer size: 4096 bytes
00:11:36.308 Vector count 1
00:11:36.308 Module: software
00:11:36.308 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:11:36.308 Queue depth: 32
00:11:36.308 Allocate depth: 32
00:11:36.308 # threads/core: 1
00:11:36.308 Run time: 1 seconds
00:11:36.308 Verify: Yes
00:11:36.308 
00:11:36.308 Running for 1 seconds...
00:11:36.308 
00:11:36.308 Core,Thread Transfers Bandwidth Failed Miscompares
00:11:36.308 ------------------------------------------------------------------------------------
00:11:36.308 0,0 57216/s 105 MiB/s 0 0
00:11:36.308 3,0 59712/s 110 MiB/s 0 0
00:11:36.308 2,0 59808/s 110 MiB/s 0 0
00:11:36.308 1,0 59648/s 109 MiB/s 0 0
00:11:36.308 ====================================================================================
00:11:36.308 Total 236384/s 923 MiB/s 0 0'
00:11:36.308 16:50:24 -- accel/accel.sh@20 -- # IFS=: 00:11:36.308 16:50:24 -- accel/accel.sh@20 -- # read -r var val 00:11:36.308 16:50:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:36.308 16:50:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:36.308 16:50:24 -- accel/accel.sh@12 -- # build_accel_config 00:11:36.308 16:50:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:36.308 16:50:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:36.308 16:50:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:36.308 16:50:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:36.308 16:50:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:36.308 16:50:24 -- accel/accel.sh@41 -- # local IFS=, 00:11:36.308 16:50:24 -- accel/accel.sh@42 -- # jq -r .
00:11:36.308 [2024-11-05 16:50:24.706384] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
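For reference: "-m 0xf" sets a four-core mask, which is why this pass reports four available cores, starts a reactor on each, and prints one table row per core. The Total transfer rate is the per-core rates summed, and the Total bandwidth follows from the 4096-byte transfer size; both can be sanity-checked directly (a sketch; MiB/s is floor-rounded, matching the log):

    # sum the per-core rows and convert to MiB/s at 4096 B per transfer
    echo $(( 57216 + 59712 + 59808 + 59648 )) transfers/s               # -> 236384
    echo $(( (57216 + 59712 + 59808 + 59648) * 4096 / 1048576 )) MiB/s  # -> 923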
00:11:36.308 [2024-11-05 16:50:24.706583] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107765 ] 00:11:36.308 [2024-11-05 16:50:24.889786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.308 [2024-11-05 16:50:25.065555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.308 [2024-11-05 16:50:25.065673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.308 [2024-11-05 16:50:25.066135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.308 [2024-11-05 16:50:25.066141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.569 16:50:25 -- accel/accel.sh@21 -- # val= 00:11:36.569 16:50:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # IFS=: 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # read -r var val 00:11:36.569 16:50:25 -- accel/accel.sh@21 -- # val= 00:11:36.569 16:50:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # IFS=: 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # read -r var val 00:11:36.569 16:50:25 -- accel/accel.sh@21 -- # val= 00:11:36.569 16:50:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # IFS=: 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # read -r var val 00:11:36.569 16:50:25 -- accel/accel.sh@21 -- # val=0xf 00:11:36.569 16:50:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # IFS=: 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # read -r var val 00:11:36.569 16:50:25 -- accel/accel.sh@21 -- # val= 00:11:36.569 16:50:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # IFS=: 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # read -r var val 00:11:36.569 16:50:25 -- accel/accel.sh@21 -- # val= 00:11:36.569 16:50:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # IFS=: 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # read -r var val 00:11:36.569 16:50:25 -- accel/accel.sh@21 -- # val=decompress 00:11:36.569 16:50:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.569 16:50:25 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # IFS=: 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # read -r var val 00:11:36.569 16:50:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:36.569 16:50:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # IFS=: 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # read -r var val 00:11:36.569 16:50:25 -- accel/accel.sh@21 -- # val= 00:11:36.569 16:50:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # IFS=: 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # read -r var val 00:11:36.569 16:50:25 -- accel/accel.sh@21 -- # val=software 00:11:36.569 16:50:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.569 16:50:25 -- accel/accel.sh@23 -- # accel_module=software 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # IFS=: 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # read -r var val 00:11:36.569 16:50:25 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:36.569 16:50:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # IFS=: 
00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # read -r var val 00:11:36.569 16:50:25 -- accel/accel.sh@21 -- # val=32 00:11:36.569 16:50:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # IFS=: 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # read -r var val 00:11:36.569 16:50:25 -- accel/accel.sh@21 -- # val=32 00:11:36.569 16:50:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # IFS=: 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # read -r var val 00:11:36.569 16:50:25 -- accel/accel.sh@21 -- # val=1 00:11:36.569 16:50:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # IFS=: 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # read -r var val 00:11:36.569 16:50:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:36.569 16:50:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # IFS=: 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # read -r var val 00:11:36.569 16:50:25 -- accel/accel.sh@21 -- # val=Yes 00:11:36.569 16:50:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # IFS=: 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # read -r var val 00:11:36.569 16:50:25 -- accel/accel.sh@21 -- # val= 00:11:36.569 16:50:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # IFS=: 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # read -r var val 00:11:36.569 16:50:25 -- accel/accel.sh@21 -- # val= 00:11:36.569 16:50:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # IFS=: 00:11:36.569 16:50:25 -- accel/accel.sh@20 -- # read -r var val 00:11:38.481 16:50:27 -- accel/accel.sh@21 -- # val= 00:11:38.481 16:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.481 16:50:27 -- accel/accel.sh@20 -- # IFS=: 00:11:38.481 16:50:27 -- accel/accel.sh@20 -- # read -r var val 00:11:38.481 16:50:27 -- accel/accel.sh@21 -- # val= 00:11:38.481 16:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.481 16:50:27 -- accel/accel.sh@20 -- # IFS=: 00:11:38.481 16:50:27 -- accel/accel.sh@20 -- # read -r var val 00:11:38.481 16:50:27 -- accel/accel.sh@21 -- # val= 00:11:38.481 16:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.481 16:50:27 -- accel/accel.sh@20 -- # IFS=: 00:11:38.481 16:50:27 -- accel/accel.sh@20 -- # read -r var val 00:11:38.481 16:50:27 -- accel/accel.sh@21 -- # val= 00:11:38.481 16:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.481 16:50:27 -- accel/accel.sh@20 -- # IFS=: 00:11:38.481 16:50:27 -- accel/accel.sh@20 -- # read -r var val 00:11:38.481 16:50:27 -- accel/accel.sh@21 -- # val= 00:11:38.481 16:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.481 16:50:27 -- accel/accel.sh@20 -- # IFS=: 00:11:38.481 16:50:27 -- accel/accel.sh@20 -- # read -r var val 00:11:38.481 16:50:27 -- accel/accel.sh@21 -- # val= 00:11:38.481 16:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.481 16:50:27 -- accel/accel.sh@20 -- # IFS=: 00:11:38.481 16:50:27 -- accel/accel.sh@20 -- # read -r var val 00:11:38.481 16:50:27 -- accel/accel.sh@21 -- # val= 00:11:38.481 16:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.481 16:50:27 -- accel/accel.sh@20 -- # IFS=: 00:11:38.481 16:50:27 -- accel/accel.sh@20 -- # read -r var val 00:11:38.481 16:50:27 -- accel/accel.sh@21 -- # val= 00:11:38.481 16:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.481 16:50:27 -- accel/accel.sh@20 -- # IFS=: 00:11:38.481 16:50:27 -- 
accel/accel.sh@20 -- # read -r var val 00:11:38.481 16:50:27 -- accel/accel.sh@21 -- # val= 00:11:38.481 16:50:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.481 16:50:27 -- accel/accel.sh@20 -- # IFS=: 00:11:38.481 16:50:27 -- accel/accel.sh@20 -- # read -r var val
00:11:38.481 16:50:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:38.481 16:50:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:38.481 16:50:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:11:38.481 
00:11:38.481 real 0m4.823s
00:11:38.481 user 0m14.237s
00:11:38.482 sys 0m0.408s
00:11:38.482 ************************************
00:11:38.482 END TEST accel_decomp_mcore
00:11:38.482 ************************************
00:11:38.482 16:50:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:38.482 16:50:27 -- common/autotest_common.sh@10 -- # set +x
00:11:38.482 16:50:27 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:11:38.482 16:50:27 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:11:38.482 16:50:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:38.482 16:50:27 -- common/autotest_common.sh@10 -- # set +x
00:11:38.482 ************************************
00:11:38.482 START TEST accel_decomp_full_mcore
00:11:38.482 ************************************
00:11:38.482 16:50:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:38.482 16:50:27 -- accel/accel.sh@16 -- # local accel_opc 00:11:38.482 16:50:27 -- accel/accel.sh@17 -- # local accel_module 00:11:38.482 16:50:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:38.482 16:50:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:38.482 16:50:27 -- accel/accel.sh@12 -- # build_accel_config 00:11:38.482 16:50:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:38.482 16:50:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:38.482 16:50:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:38.482 16:50:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:38.482 16:50:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:38.482 16:50:27 -- accel/accel.sh@41 -- # local IFS=, 00:11:38.482 16:50:27 -- accel/accel.sh@42 -- # jq -r .
00:11:38.482 [2024-11-05 16:50:27.177057] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:38.482 [2024-11-05 16:50:27.177745] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107813 ]
00:11:38.740 [2024-11-05 16:50:27.364260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:38.740 [2024-11-05 16:50:27.533852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:38.740 [2024-11-05 16:50:27.534014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:11:38.740 [2024-11-05 16:50:27.534101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:11:38.740 [2024-11-05 16:50:27.534103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:41.273 16:50:29 -- accel/accel.sh@18 -- # out='Preparing input file...
00:11:41.273 
00:11:41.273 SPDK Configuration:
00:11:41.273 Core mask: 0xf
00:11:41.273 
00:11:41.273 Accel Perf Configuration:
00:11:41.273 Workload Type: decompress
00:11:41.273 Transfer size: 111250 bytes
00:11:41.273 Vector count 1
00:11:41.273 Module: software
00:11:41.273 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:11:41.273 Queue depth: 32
00:11:41.273 Allocate depth: 32
00:11:41.273 # threads/core: 1
00:11:41.273 Run time: 1 seconds
00:11:41.273 Verify: Yes
00:11:41.273 
00:11:41.273 Running for 1 seconds...
00:11:41.273 
00:11:41.273 Core,Thread Transfers Bandwidth Failed Miscompares
00:11:41.273 ------------------------------------------------------------------------------------
00:11:41.273 0,0 5024/s 207 MiB/s 0 0
00:11:41.273 3,0 4960/s 204 MiB/s 0 0
00:11:41.273 2,0 4992/s 206 MiB/s 0 0
00:11:41.273 1,0 5024/s 207 MiB/s 0 0
00:11:41.273 ====================================================================================
00:11:41.273 Total 20000/s 2121 MiB/s 0 0'
00:11:41.273 16:50:29 -- accel/accel.sh@20 -- # IFS=: 00:11:41.273 16:50:29 -- accel/accel.sh@20 -- # read -r var val 00:11:41.273 16:50:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:41.273 16:50:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:41.273 16:50:29 -- accel/accel.sh@12 -- # build_accel_config 00:11:41.273 16:50:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:41.273 16:50:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:41.273 16:50:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:41.273 16:50:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:41.273 16:50:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:41.273 16:50:29 -- accel/accel.sh@41 -- # local IFS=, 00:11:41.273 16:50:29 -- accel/accel.sh@42 -- # jq -r .
00:11:41.273 [2024-11-05 16:50:29.625874] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
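For reference: this final pass combines both variants, whole-file transfers via -o 0 and the 0xf core mask, and its Total row is again consistent with the per-core transfer rates summed and multiplied by the 111250-byte transfer size; a quick check (floor-rounded MiB/s, matching the log):

    # per-core transfer rates sum to the Total row ...
    echo $(( 5024 + 4960 + 4992 + 5024 )) transfers/s                     # -> 20000
    # ... and at 111250 B per transfer that is the reported Total bandwidth
    echo $(( (5024 + 4960 + 4992 + 5024) * 111250 / 1048576 )) MiB/s      # -> 2121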
00:11:41.273 [2024-11-05 16:50:29.626081] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107860 ] 00:11:41.273 [2024-11-05 16:50:29.801075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.273 [2024-11-05 16:50:30.009174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.273 [2024-11-05 16:50:30.009319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.273 [2024-11-05 16:50:30.009420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.273 [2024-11-05 16:50:30.009424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.532 16:50:30 -- accel/accel.sh@21 -- # val= 00:11:41.532 16:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # IFS=: 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # read -r var val 00:11:41.532 16:50:30 -- accel/accel.sh@21 -- # val= 00:11:41.532 16:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # IFS=: 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # read -r var val 00:11:41.532 16:50:30 -- accel/accel.sh@21 -- # val= 00:11:41.532 16:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # IFS=: 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # read -r var val 00:11:41.532 16:50:30 -- accel/accel.sh@21 -- # val=0xf 00:11:41.532 16:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # IFS=: 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # read -r var val 00:11:41.532 16:50:30 -- accel/accel.sh@21 -- # val= 00:11:41.532 16:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # IFS=: 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # read -r var val 00:11:41.532 16:50:30 -- accel/accel.sh@21 -- # val= 00:11:41.532 16:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # IFS=: 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # read -r var val 00:11:41.532 16:50:30 -- accel/accel.sh@21 -- # val=decompress 00:11:41.532 16:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.532 16:50:30 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # IFS=: 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # read -r var val 00:11:41.532 16:50:30 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:41.532 16:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # IFS=: 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # read -r var val 00:11:41.532 16:50:30 -- accel/accel.sh@21 -- # val= 00:11:41.532 16:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # IFS=: 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # read -r var val 00:11:41.532 16:50:30 -- accel/accel.sh@21 -- # val=software 00:11:41.532 16:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.532 16:50:30 -- accel/accel.sh@23 -- # accel_module=software 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # IFS=: 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # read -r var val 00:11:41.532 16:50:30 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:41.532 16:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # IFS=: 
00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # read -r var val 00:11:41.532 16:50:30 -- accel/accel.sh@21 -- # val=32 00:11:41.532 16:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # IFS=: 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # read -r var val 00:11:41.532 16:50:30 -- accel/accel.sh@21 -- # val=32 00:11:41.532 16:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # IFS=: 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # read -r var val 00:11:41.532 16:50:30 -- accel/accel.sh@21 -- # val=1 00:11:41.532 16:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # IFS=: 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # read -r var val 00:11:41.532 16:50:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:41.532 16:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # IFS=: 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # read -r var val 00:11:41.532 16:50:30 -- accel/accel.sh@21 -- # val=Yes 00:11:41.532 16:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # IFS=: 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # read -r var val 00:11:41.532 16:50:30 -- accel/accel.sh@21 -- # val= 00:11:41.532 16:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # IFS=: 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # read -r var val 00:11:41.532 16:50:30 -- accel/accel.sh@21 -- # val= 00:11:41.532 16:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # IFS=: 00:11:41.532 16:50:30 -- accel/accel.sh@20 -- # read -r var val 00:11:43.435 16:50:31 -- accel/accel.sh@21 -- # val= 00:11:43.435 16:50:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.435 16:50:31 -- accel/accel.sh@20 -- # IFS=: 00:11:43.435 16:50:31 -- accel/accel.sh@20 -- # read -r var val 00:11:43.435 16:50:31 -- accel/accel.sh@21 -- # val= 00:11:43.435 16:50:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.435 16:50:31 -- accel/accel.sh@20 -- # IFS=: 00:11:43.435 16:50:31 -- accel/accel.sh@20 -- # read -r var val 00:11:43.435 16:50:31 -- accel/accel.sh@21 -- # val= 00:11:43.435 16:50:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.435 16:50:31 -- accel/accel.sh@20 -- # IFS=: 00:11:43.435 16:50:31 -- accel/accel.sh@20 -- # read -r var val 00:11:43.435 16:50:31 -- accel/accel.sh@21 -- # val= 00:11:43.435 16:50:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.435 16:50:31 -- accel/accel.sh@20 -- # IFS=: 00:11:43.435 16:50:31 -- accel/accel.sh@20 -- # read -r var val 00:11:43.435 16:50:31 -- accel/accel.sh@21 -- # val= 00:11:43.435 16:50:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.435 16:50:31 -- accel/accel.sh@20 -- # IFS=: 00:11:43.435 16:50:31 -- accel/accel.sh@20 -- # read -r var val 00:11:43.435 16:50:31 -- accel/accel.sh@21 -- # val= 00:11:43.435 16:50:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.435 16:50:31 -- accel/accel.sh@20 -- # IFS=: 00:11:43.435 16:50:31 -- accel/accel.sh@20 -- # read -r var val 00:11:43.435 16:50:31 -- accel/accel.sh@21 -- # val= 00:11:43.435 16:50:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.435 16:50:31 -- accel/accel.sh@20 -- # IFS=: 00:11:43.435 16:50:31 -- accel/accel.sh@20 -- # read -r var val 00:11:43.435 16:50:31 -- accel/accel.sh@21 -- # val= 00:11:43.435 16:50:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.435 16:50:31 -- accel/accel.sh@20 -- # IFS=: 00:11:43.435 16:50:31 -- 
accel/accel.sh@20 -- # read -r var val 00:11:43.435 16:50:31 -- accel/accel.sh@21 -- # val= 00:11:43.436 16:50:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.436 16:50:31 -- accel/accel.sh@20 -- # IFS=: 00:11:43.436 16:50:31 -- accel/accel.sh@20 -- # read -r var val 00:11:43.436 16:50:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:43.436 16:50:32 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:43.436 16:50:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:43.436 00:11:43.436 real 0m4.882s 00:11:43.436 user 0m14.527s 00:11:43.436 sys 0m0.410s 00:11:43.436 16:50:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:43.436 ************************************ 00:11:43.436 END TEST accel_decomp_full_mcore 00:11:43.436 ************************************ 00:11:43.436 16:50:32 -- common/autotest_common.sh@10 -- # set +x 00:11:43.436 16:50:32 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:43.436 16:50:32 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:11:43.436 16:50:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:43.436 16:50:32 -- common/autotest_common.sh@10 -- # set +x 00:11:43.436 ************************************ 00:11:43.436 START TEST accel_decomp_mthread 00:11:43.436 ************************************ 00:11:43.436 16:50:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:43.436 16:50:32 -- accel/accel.sh@16 -- # local accel_opc 00:11:43.436 16:50:32 -- accel/accel.sh@17 -- # local accel_module 00:11:43.436 16:50:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:43.436 16:50:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:43.436 16:50:32 -- accel/accel.sh@12 -- # build_accel_config 00:11:43.436 16:50:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:43.436 16:50:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:43.436 16:50:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:43.436 16:50:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:43.436 16:50:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:43.436 16:50:32 -- accel/accel.sh@41 -- # local IFS=, 00:11:43.436 16:50:32 -- accel/accel.sh@42 -- # jq -r . 00:11:43.436 [2024-11-05 16:50:32.112930] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:43.436 [2024-11-05 16:50:32.113122] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107908 ] 00:11:43.436 [2024-11-05 16:50:32.278849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.694 [2024-11-05 16:50:32.444113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.602 16:50:34 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:45.602 00:11:45.602 SPDK Configuration: 00:11:45.602 Core mask: 0x1 00:11:45.602 00:11:45.602 Accel Perf Configuration: 00:11:45.602 Workload Type: decompress 00:11:45.602 Transfer size: 4096 bytes 00:11:45.602 Vector count 1 00:11:45.602 Module: software 00:11:45.602 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:45.602 Queue depth: 32 00:11:45.602 Allocate depth: 32 00:11:45.602 # threads/core: 2 00:11:45.602 Run time: 1 seconds 00:11:45.602 Verify: Yes 00:11:45.602 00:11:45.602 Running for 1 seconds... 00:11:45.602 00:11:45.602 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:45.602 ------------------------------------------------------------------------------------ 00:11:45.602 0,1 36128/s 66 MiB/s 0 0 00:11:45.602 0,0 36032/s 66 MiB/s 0 0 00:11:45.602 ==================================================================================== 00:11:45.602 Total 72160/s 281 MiB/s 0 0' 00:11:45.602 16:50:34 -- accel/accel.sh@20 -- # IFS=: 00:11:45.602 16:50:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:45.602 16:50:34 -- accel/accel.sh@20 -- # read -r var val 00:11:45.602 16:50:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:45.602 16:50:34 -- accel/accel.sh@12 -- # build_accel_config 00:11:45.602 16:50:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:45.602 16:50:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:45.602 16:50:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:45.602 16:50:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:45.602 16:50:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:45.602 16:50:34 -- accel/accel.sh@41 -- # local IFS=, 00:11:45.602 16:50:34 -- accel/accel.sh@42 -- # jq -r . 00:11:45.602 [2024-11-05 16:50:34.373245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
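The two rows for core 0 above are the effect of -T 2: two worker threads share the single core in mask 0x1, each moving about 36000 transfers per second at the 4096-byte size. A minimal sketch of the same shape (paths shortened; the harness additionally feeds a JSON config over a file descriptor):

# Sketch: -T 2 asks for two threads per core, so the results table reports
# rows 0,0 and 0,1 for the one core in the mask.
./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -T 2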
00:11:45.602 [2024-11-05 16:50:34.373488] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107954 ] 00:11:45.860 [2024-11-05 16:50:34.543466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.860 [2024-11-05 16:50:34.721188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.118 16:50:34 -- accel/accel.sh@21 -- # val= 00:11:46.118 16:50:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # IFS=: 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # read -r var val 00:11:46.118 16:50:34 -- accel/accel.sh@21 -- # val= 00:11:46.118 16:50:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # IFS=: 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # read -r var val 00:11:46.118 16:50:34 -- accel/accel.sh@21 -- # val= 00:11:46.118 16:50:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # IFS=: 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # read -r var val 00:11:46.118 16:50:34 -- accel/accel.sh@21 -- # val=0x1 00:11:46.118 16:50:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # IFS=: 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # read -r var val 00:11:46.118 16:50:34 -- accel/accel.sh@21 -- # val= 00:11:46.118 16:50:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # IFS=: 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # read -r var val 00:11:46.118 16:50:34 -- accel/accel.sh@21 -- # val= 00:11:46.118 16:50:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # IFS=: 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # read -r var val 00:11:46.118 16:50:34 -- accel/accel.sh@21 -- # val=decompress 00:11:46.118 16:50:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.118 16:50:34 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # IFS=: 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # read -r var val 00:11:46.118 16:50:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:46.118 16:50:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # IFS=: 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # read -r var val 00:11:46.118 16:50:34 -- accel/accel.sh@21 -- # val= 00:11:46.118 16:50:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # IFS=: 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # read -r var val 00:11:46.118 16:50:34 -- accel/accel.sh@21 -- # val=software 00:11:46.118 16:50:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.118 16:50:34 -- accel/accel.sh@23 -- # accel_module=software 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # IFS=: 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # read -r var val 00:11:46.118 16:50:34 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:46.118 16:50:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # IFS=: 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # read -r var val 00:11:46.118 16:50:34 -- accel/accel.sh@21 -- # val=32 00:11:46.118 16:50:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # IFS=: 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # read -r var val 00:11:46.118 16:50:34 -- 
accel/accel.sh@21 -- # val=32 00:11:46.118 16:50:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # IFS=: 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # read -r var val 00:11:46.118 16:50:34 -- accel/accel.sh@21 -- # val=2 00:11:46.118 16:50:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # IFS=: 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # read -r var val 00:11:46.118 16:50:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:46.118 16:50:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # IFS=: 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # read -r var val 00:11:46.118 16:50:34 -- accel/accel.sh@21 -- # val=Yes 00:11:46.118 16:50:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # IFS=: 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # read -r var val 00:11:46.118 16:50:34 -- accel/accel.sh@21 -- # val= 00:11:46.118 16:50:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # IFS=: 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # read -r var val 00:11:46.118 16:50:34 -- accel/accel.sh@21 -- # val= 00:11:46.118 16:50:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # IFS=: 00:11:46.118 16:50:34 -- accel/accel.sh@20 -- # read -r var val 00:11:48.021 16:50:36 -- accel/accel.sh@21 -- # val= 00:11:48.021 16:50:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.021 16:50:36 -- accel/accel.sh@20 -- # IFS=: 00:11:48.021 16:50:36 -- accel/accel.sh@20 -- # read -r var val 00:11:48.021 16:50:36 -- accel/accel.sh@21 -- # val= 00:11:48.021 16:50:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.021 16:50:36 -- accel/accel.sh@20 -- # IFS=: 00:11:48.021 16:50:36 -- accel/accel.sh@20 -- # read -r var val 00:11:48.021 16:50:36 -- accel/accel.sh@21 -- # val= 00:11:48.021 16:50:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.021 16:50:36 -- accel/accel.sh@20 -- # IFS=: 00:11:48.021 16:50:36 -- accel/accel.sh@20 -- # read -r var val 00:11:48.021 16:50:36 -- accel/accel.sh@21 -- # val= 00:11:48.021 16:50:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.021 16:50:36 -- accel/accel.sh@20 -- # IFS=: 00:11:48.021 16:50:36 -- accel/accel.sh@20 -- # read -r var val 00:11:48.021 16:50:36 -- accel/accel.sh@21 -- # val= 00:11:48.021 16:50:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.021 16:50:36 -- accel/accel.sh@20 -- # IFS=: 00:11:48.021 16:50:36 -- accel/accel.sh@20 -- # read -r var val 00:11:48.021 16:50:36 -- accel/accel.sh@21 -- # val= 00:11:48.021 16:50:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.021 16:50:36 -- accel/accel.sh@20 -- # IFS=: 00:11:48.021 16:50:36 -- accel/accel.sh@20 -- # read -r var val 00:11:48.021 16:50:36 -- accel/accel.sh@21 -- # val= 00:11:48.021 16:50:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.021 16:50:36 -- accel/accel.sh@20 -- # IFS=: 00:11:48.021 16:50:36 -- accel/accel.sh@20 -- # read -r var val 00:11:48.021 16:50:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:48.021 16:50:36 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:48.021 16:50:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:48.021 00:11:48.021 real 0m4.602s 00:11:48.021 user 0m4.035s 00:11:48.021 sys 0m0.394s 00:11:48.021 16:50:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:48.021 ************************************ 00:11:48.021 END TEST accel_decomp_mthread 00:11:48.021 
************************************ 00:11:48.021 16:50:36 -- common/autotest_common.sh@10 -- # set +x 00:11:48.021 16:50:36 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:48.021 16:50:36 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:11:48.021 16:50:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:48.021 16:50:36 -- common/autotest_common.sh@10 -- # set +x 00:11:48.021 ************************************ 00:11:48.021 START TEST accel_deomp_full_mthread 00:11:48.021 ************************************ 00:11:48.021 16:50:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:48.021 16:50:36 -- accel/accel.sh@16 -- # local accel_opc 00:11:48.021 16:50:36 -- accel/accel.sh@17 -- # local accel_module 00:11:48.021 16:50:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:48.021 16:50:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:48.021 16:50:36 -- accel/accel.sh@12 -- # build_accel_config 00:11:48.021 16:50:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:48.021 16:50:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:48.021 16:50:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:48.021 16:50:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:48.021 16:50:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:48.021 16:50:36 -- accel/accel.sh@41 -- # local IFS=, 00:11:48.021 16:50:36 -- accel/accel.sh@42 -- # jq -r . 00:11:48.021 [2024-11-05 16:50:36.759271] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:48.021 [2024-11-05 16:50:36.759464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108000 ] 00:11:48.279 [2024-11-05 16:50:36.924959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.279 [2024-11-05 16:50:37.084775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.180 16:50:39 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:50.180 00:11:50.180 SPDK Configuration: 00:11:50.180 Core mask: 0x1 00:11:50.180 00:11:50.180 Accel Perf Configuration: 00:11:50.180 Workload Type: decompress 00:11:50.180 Transfer size: 111250 bytes 00:11:50.180 Vector count 1 00:11:50.180 Module: software 00:11:50.180 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:50.180 Queue depth: 32 00:11:50.180 Allocate depth: 32 00:11:50.180 # threads/core: 2 00:11:50.180 Run time: 1 seconds 00:11:50.180 Verify: Yes 00:11:50.180 00:11:50.180 Running for 1 seconds... 
00:11:50.180 00:11:50.180 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:50.180 ------------------------------------------------------------------------------------ 00:11:50.180 0,1 2752/s 113 MiB/s 0 0 00:11:50.180 0,0 2688/s 111 MiB/s 0 0 00:11:50.180 ==================================================================================== 00:11:50.180 Total 5440/s 577 MiB/s 0 0' 00:11:50.180 16:50:39 -- accel/accel.sh@20 -- # IFS=: 00:11:50.180 16:50:39 -- accel/accel.sh@20 -- # read -r var val 00:11:50.180 16:50:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:50.180 16:50:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:50.180 16:50:39 -- accel/accel.sh@12 -- # build_accel_config 00:11:50.180 16:50:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:50.180 16:50:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:50.180 16:50:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:50.180 16:50:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:50.180 16:50:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:50.180 16:50:39 -- accel/accel.sh@41 -- # local IFS=, 00:11:50.180 16:50:39 -- accel/accel.sh@42 -- # jq -r . 00:11:50.438 [2024-11-05 16:50:39.080055] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:50.438 [2024-11-05 16:50:39.080253] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108035 ] 00:11:50.438 [2024-11-05 16:50:39.242861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.696 [2024-11-05 16:50:39.423277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.966 16:50:39 -- accel/accel.sh@21 -- # val= 00:11:50.966 16:50:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.966 16:50:39 -- accel/accel.sh@20 -- # IFS=: 00:11:50.966 16:50:39 -- accel/accel.sh@20 -- # read -r var val 00:11:50.966 16:50:39 -- accel/accel.sh@21 -- # val= 00:11:50.966 16:50:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.966 16:50:39 -- accel/accel.sh@20 -- # IFS=: 00:11:50.966 16:50:39 -- accel/accel.sh@20 -- # read -r var val 00:11:50.966 16:50:39 -- accel/accel.sh@21 -- # val= 00:11:50.966 16:50:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.966 16:50:39 -- accel/accel.sh@20 -- # IFS=: 00:11:50.966 16:50:39 -- accel/accel.sh@20 -- # read -r var val 00:11:50.966 16:50:39 -- accel/accel.sh@21 -- # val=0x1 00:11:50.966 16:50:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.966 16:50:39 -- accel/accel.sh@20 -- # IFS=: 00:11:50.966 16:50:39 -- accel/accel.sh@20 -- # read -r var val 00:11:50.966 16:50:39 -- accel/accel.sh@21 -- # val= 00:11:50.966 16:50:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.966 16:50:39 -- accel/accel.sh@20 -- # IFS=: 00:11:50.966 16:50:39 -- accel/accel.sh@20 -- # read -r var val 00:11:50.966 16:50:39 -- accel/accel.sh@21 -- # val= 00:11:50.966 16:50:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.966 16:50:39 -- accel/accel.sh@20 -- # IFS=: 00:11:50.966 16:50:39 -- accel/accel.sh@20 -- # read -r var val 00:11:50.966 16:50:39 -- accel/accel.sh@21 -- # val=decompress 00:11:50.966 16:50:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.966 16:50:39 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # IFS=: 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # read -r var val 00:11:50.967 16:50:39 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:50.967 16:50:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # IFS=: 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # read -r var val 00:11:50.967 16:50:39 -- accel/accel.sh@21 -- # val= 00:11:50.967 16:50:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # IFS=: 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # read -r var val 00:11:50.967 16:50:39 -- accel/accel.sh@21 -- # val=software 00:11:50.967 16:50:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.967 16:50:39 -- accel/accel.sh@23 -- # accel_module=software 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # IFS=: 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # read -r var val 00:11:50.967 16:50:39 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:50.967 16:50:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # IFS=: 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # read -r var val 00:11:50.967 16:50:39 -- accel/accel.sh@21 -- # val=32 00:11:50.967 16:50:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # IFS=: 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # read -r var val 00:11:50.967 16:50:39 -- accel/accel.sh@21 -- # val=32 00:11:50.967 16:50:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # IFS=: 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # read -r var val 00:11:50.967 16:50:39 -- accel/accel.sh@21 -- # val=2 00:11:50.967 16:50:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # IFS=: 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # read -r var val 00:11:50.967 16:50:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:50.967 16:50:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # IFS=: 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # read -r var val 00:11:50.967 16:50:39 -- accel/accel.sh@21 -- # val=Yes 00:11:50.967 16:50:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # IFS=: 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # read -r var val 00:11:50.967 16:50:39 -- accel/accel.sh@21 -- # val= 00:11:50.967 16:50:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.967 16:50:39 -- accel/accel.sh@20 -- # IFS=: 00:11:50.968 16:50:39 -- accel/accel.sh@20 -- # read -r var val 00:11:50.968 16:50:39 -- accel/accel.sh@21 -- # val= 00:11:50.968 16:50:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.968 16:50:39 -- accel/accel.sh@20 -- # IFS=: 00:11:50.968 16:50:39 -- accel/accel.sh@20 -- # read -r var val 00:11:52.867 16:50:41 -- accel/accel.sh@21 -- # val= 00:11:52.867 16:50:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.867 16:50:41 -- accel/accel.sh@20 -- # IFS=: 00:11:52.867 16:50:41 -- accel/accel.sh@20 -- # read -r var val 00:11:52.867 16:50:41 -- accel/accel.sh@21 -- # val= 00:11:52.867 16:50:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.867 16:50:41 -- accel/accel.sh@20 -- # IFS=: 00:11:52.867 16:50:41 -- accel/accel.sh@20 -- # read -r var val 00:11:52.867 16:50:41 -- accel/accel.sh@21 -- # val= 00:11:52.867 16:50:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.867 16:50:41 -- accel/accel.sh@20 -- # IFS=: 00:11:52.867 16:50:41 -- accel/accel.sh@20 -- # 
read -r var val 00:11:52.867 16:50:41 -- accel/accel.sh@21 -- # val= 00:11:52.867 16:50:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.867 16:50:41 -- accel/accel.sh@20 -- # IFS=: 00:11:52.867 16:50:41 -- accel/accel.sh@20 -- # read -r var val 00:11:52.867 16:50:41 -- accel/accel.sh@21 -- # val= 00:11:52.867 16:50:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.867 16:50:41 -- accel/accel.sh@20 -- # IFS=: 00:11:52.867 16:50:41 -- accel/accel.sh@20 -- # read -r var val 00:11:52.867 16:50:41 -- accel/accel.sh@21 -- # val= 00:11:52.867 16:50:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.867 16:50:41 -- accel/accel.sh@20 -- # IFS=: 00:11:52.867 16:50:41 -- accel/accel.sh@20 -- # read -r var val 00:11:52.867 16:50:41 -- accel/accel.sh@21 -- # val= 00:11:52.867 16:50:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.867 16:50:41 -- accel/accel.sh@20 -- # IFS=: 00:11:52.867 16:50:41 -- accel/accel.sh@20 -- # read -r var val 00:11:52.867 16:50:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:52.867 16:50:41 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:52.868 16:50:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:52.868 00:11:52.868 real 0m4.683s 00:11:52.868 user 0m4.140s 00:11:52.868 sys 0m0.359s 00:11:52.868 ************************************ 00:11:52.868 END TEST accel_deomp_full_mthread 00:11:52.868 ************************************ 00:11:52.868 16:50:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:52.868 16:50:41 -- common/autotest_common.sh@10 -- # set +x 00:11:52.868 16:50:41 -- accel/accel.sh@116 -- # [[ n == y ]] 00:11:52.868 16:50:41 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:52.868 16:50:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:52.868 16:50:41 -- accel/accel.sh@129 -- # build_accel_config 00:11:52.868 16:50:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:52.868 16:50:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:52.868 16:50:41 -- common/autotest_common.sh@10 -- # set +x 00:11:52.868 16:50:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:52.868 16:50:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:52.868 16:50:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:52.868 16:50:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:52.868 16:50:41 -- accel/accel.sh@41 -- # local IFS=, 00:11:52.868 16:50:41 -- accel/accel.sh@42 -- # jq -r . 00:11:52.868 ************************************ 00:11:52.868 START TEST accel_dif_functional_tests 00:11:52.868 ************************************ 00:11:52.868 16:50:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:52.868 [2024-11-05 16:50:41.506410] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
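The DIF functional tests starting here receive their accel configuration as JSON on /dev/fd/62 rather than from a file on disk. A sketch of equivalent wiring via process substitution, assuming build_accel_config (the harness helper traced throughout this log) emits the config; the shell picks the actual fd number:

# Sketch: hand the generated accel JSON config to the DIF tester over a
# process-substitution fd instead of a temp file.
./test/accel/dif/dif -c <(build_accel_config)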
00:11:52.868 [2024-11-05 16:50:41.506584] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108090 ] 00:11:52.868 [2024-11-05 16:50:41.669341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:53.125 [2024-11-05 16:50:41.833304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.125 [2024-11-05 16:50:41.833462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.125 [2024-11-05 16:50:41.833459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.382 00:11:53.382 00:11:53.382 CUnit - A unit testing framework for C - Version 2.1-3 00:11:53.382 http://cunit.sourceforge.net/ 00:11:53.382 00:11:53.382 00:11:53.382 Suite: accel_dif 00:11:53.382 Test: verify: DIF generated, GUARD check ...passed 00:11:53.382 Test: verify: DIF generated, APPTAG check ...passed 00:11:53.382 Test: verify: DIF generated, REFTAG check ...passed 00:11:53.382 Test: verify: DIF not generated, GUARD check ...[2024-11-05 16:50:42.114294] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:53.382 passed 00:11:53.382 Test: verify: DIF not generated, APPTAG check ...[2024-11-05 16:50:42.114430] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:53.382 [2024-11-05 16:50:42.114530] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:53.382 passed 00:11:53.382 Test: verify: DIF not generated, REFTAG check ...[2024-11-05 16:50:42.114591] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:53.382 [2024-11-05 16:50:42.114663] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:53.382 [2024-11-05 16:50:42.114723] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:53.382 passed 00:11:53.382 Test: verify: APPTAG correct, APPTAG check ...passed 00:11:53.382 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-05 16:50:42.114917] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:53.382 passed 00:11:53.382 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:11:53.382 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:53.382 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:53.383 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-05 16:50:42.115228] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:53.383 passed 00:11:53.383 Test: generate copy: DIF generated, GUARD check ...passed 00:11:53.383 Test: generate copy: DIF generated, APTTAG check ...passed 00:11:53.383 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:53.383 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:11:53.383 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:11:53.383 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:11:53.383 Test: generate copy: iovecs-len validate ...passed 00:11:53.383 Test: generate copy: buffer alignment validate ...[2024-11-05 16:50:42.115835] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:11:53.383 passed 00:11:53.383 00:11:53.383 Run Summary: Type Total Ran Passed Failed Inactive 00:11:53.383 suites 1 1 n/a 0 0 00:11:53.383 tests 20 20 20 0 0 00:11:53.383 asserts 204 204 204 0 n/a 00:11:53.383 00:11:53.383 Elapsed time = 0.009 seconds 00:11:54.317 00:11:54.317 real 0m1.641s 00:11:54.317 user 0m3.234s 00:11:54.317 sys 0m0.224s 00:11:54.317 ************************************ 00:11:54.317 END TEST accel_dif_functional_tests 00:11:54.317 ************************************ 00:11:54.317 16:50:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:54.317 16:50:43 -- common/autotest_common.sh@10 -- # set +x 00:11:54.317 ************************************ 00:11:54.317 END TEST accel 00:11:54.317 ************************************ 00:11:54.317 00:11:54.317 real 1m43.238s 00:11:54.317 user 1m52.839s 00:11:54.317 sys 0m9.406s 00:11:54.317 16:50:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:54.317 16:50:43 -- common/autotest_common.sh@10 -- # set +x 00:11:54.317 16:50:43 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:54.317 16:50:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:54.317 16:50:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:54.317 16:50:43 -- common/autotest_common.sh@10 -- # set +x 00:11:54.317 ************************************ 00:11:54.317 START TEST accel_rpc 00:11:54.317 ************************************ 00:11:54.317 16:50:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:54.576 * Looking for test storage... 00:11:54.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:54.576 16:50:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:54.576 16:50:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:54.576 16:50:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:54.576 16:50:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:54.576 16:50:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:54.576 16:50:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:54.576 16:50:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:54.576 16:50:43 -- scripts/common.sh@335 -- # IFS=.-: 00:11:54.576 16:50:43 -- scripts/common.sh@335 -- # read -ra ver1 00:11:54.576 16:50:43 -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.576 16:50:43 -- scripts/common.sh@336 -- # read -ra ver2 00:11:54.576 16:50:43 -- scripts/common.sh@337 -- # local 'op=<' 00:11:54.576 16:50:43 -- scripts/common.sh@339 -- # ver1_l=2 00:11:54.576 16:50:43 -- scripts/common.sh@340 -- # ver2_l=1 00:11:54.576 16:50:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:54.576 16:50:43 -- scripts/common.sh@343 -- # case "$op" in 00:11:54.576 16:50:43 -- scripts/common.sh@344 -- # : 1 00:11:54.576 16:50:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:54.576 16:50:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:54.576 16:50:43 -- scripts/common.sh@364 -- # decimal 1 00:11:54.576 16:50:43 -- scripts/common.sh@352 -- # local d=1 00:11:54.576 16:50:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.576 16:50:43 -- scripts/common.sh@354 -- # echo 1 00:11:54.576 16:50:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:54.576 16:50:43 -- scripts/common.sh@365 -- # decimal 2 00:11:54.576 16:50:43 -- scripts/common.sh@352 -- # local d=2 00:11:54.576 16:50:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.576 16:50:43 -- scripts/common.sh@354 -- # echo 2 00:11:54.576 16:50:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:54.576 16:50:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:54.576 16:50:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:54.576 16:50:43 -- scripts/common.sh@367 -- # return 0 00:11:54.576 16:50:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.577 16:50:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:54.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.577 --rc genhtml_branch_coverage=1 00:11:54.577 --rc genhtml_function_coverage=1 00:11:54.577 --rc genhtml_legend=1 00:11:54.577 --rc geninfo_all_blocks=1 00:11:54.577 --rc geninfo_unexecuted_blocks=1 00:11:54.577 00:11:54.577 ' 00:11:54.577 16:50:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:54.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.577 --rc genhtml_branch_coverage=1 00:11:54.577 --rc genhtml_function_coverage=1 00:11:54.577 --rc genhtml_legend=1 00:11:54.577 --rc geninfo_all_blocks=1 00:11:54.577 --rc geninfo_unexecuted_blocks=1 00:11:54.577 00:11:54.577 ' 00:11:54.577 16:50:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:54.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.577 --rc genhtml_branch_coverage=1 00:11:54.577 --rc genhtml_function_coverage=1 00:11:54.577 --rc genhtml_legend=1 00:11:54.577 --rc geninfo_all_blocks=1 00:11:54.577 --rc geninfo_unexecuted_blocks=1 00:11:54.577 00:11:54.577 ' 00:11:54.577 16:50:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:54.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.577 --rc genhtml_branch_coverage=1 00:11:54.577 --rc genhtml_function_coverage=1 00:11:54.577 --rc genhtml_legend=1 00:11:54.577 --rc geninfo_all_blocks=1 00:11:54.577 --rc geninfo_unexecuted_blocks=1 00:11:54.577 00:11:54.577 ' 00:11:54.577 16:50:43 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:54.577 16:50:43 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=108184 00:11:54.577 16:50:43 -- accel/accel_rpc.sh@15 -- # waitforlisten 108184 00:11:54.577 16:50:43 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:54.577 16:50:43 -- common/autotest_common.sh@829 -- # '[' -z 108184 ']' 00:11:54.577 16:50:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.577 16:50:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:54.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.577 16:50:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
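The accel_rpc tests below start spdk_tgt with --wait-for-rpc so that opcode-to-module assignments can be changed before the accel framework initializes; an assignment to a bogus module is accepted at RPC time and only resolved at framework_start_init. A sketch of the sequence the test drives (method names are taken from the traces; the default RPC socket and the backgrounding are assumptions):

./build/bin/spdk_tgt --wait-for-rpc &
./scripts/rpc.py accel_assign_opc -o copy -m software
./scripts/rpc.py framework_start_init
./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expect: software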
00:11:54.577 16:50:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:54.577 16:50:43 -- common/autotest_common.sh@10 -- # set +x 00:11:54.577 [2024-11-05 16:50:43.417095] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:54.577 [2024-11-05 16:50:43.417345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108184 ] 00:11:54.835 [2024-11-05 16:50:43.589634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.123 [2024-11-05 16:50:43.754908] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:55.123 [2024-11-05 16:50:43.755195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.689 16:50:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:55.689 16:50:44 -- common/autotest_common.sh@862 -- # return 0 00:11:55.689 16:50:44 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:55.689 16:50:44 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:11:55.689 16:50:44 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:55.689 16:50:44 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:11:55.689 16:50:44 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:55.689 16:50:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:55.689 16:50:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:55.689 16:50:44 -- common/autotest_common.sh@10 -- # set +x 00:11:55.689 ************************************ 00:11:55.689 START TEST accel_assign_opcode 00:11:55.689 ************************************ 00:11:55.689 16:50:44 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:11:55.689 16:50:44 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:55.689 16:50:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.689 16:50:44 -- common/autotest_common.sh@10 -- # set +x 00:11:55.689 [2024-11-05 16:50:44.372063] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:55.689 16:50:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.689 16:50:44 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:55.689 16:50:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.690 16:50:44 -- common/autotest_common.sh@10 -- # set +x 00:11:55.690 [2024-11-05 16:50:44.380041] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:55.690 16:50:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.690 16:50:44 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:55.690 16:50:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.690 16:50:44 -- common/autotest_common.sh@10 -- # set +x 00:11:56.257 16:50:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.257 16:50:45 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:56.257 16:50:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.257 16:50:45 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:56.257 16:50:45 -- common/autotest_common.sh@10 -- # set +x 00:11:56.257 16:50:45 -- accel/accel_rpc.sh@42 -- # grep software 00:11:56.257 16:50:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.257 software 00:11:56.257 00:11:56.257 
real 0m0.709s 00:11:56.257 user 0m0.054s 00:11:56.257 sys 0m0.010s 00:11:56.257 16:50:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:56.257 ************************************ 00:11:56.257 END TEST accel_assign_opcode 00:11:56.257 ************************************ 00:11:56.257 16:50:45 -- common/autotest_common.sh@10 -- # set +x 00:11:56.257 16:50:45 -- accel/accel_rpc.sh@55 -- # killprocess 108184 00:11:56.257 16:50:45 -- common/autotest_common.sh@936 -- # '[' -z 108184 ']' 00:11:56.257 16:50:45 -- common/autotest_common.sh@940 -- # kill -0 108184 00:11:56.257 16:50:45 -- common/autotest_common.sh@941 -- # uname 00:11:56.257 16:50:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:56.257 16:50:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 108184 00:11:56.257 16:50:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:56.257 killing process with pid 108184 00:11:56.257 16:50:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:56.257 16:50:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 108184' 00:11:56.257 16:50:45 -- common/autotest_common.sh@955 -- # kill 108184 00:11:56.257 16:50:45 -- common/autotest_common.sh@960 -- # wait 108184 00:11:58.162 00:11:58.162 real 0m3.718s 00:11:58.162 user 0m3.691s 00:11:58.162 sys 0m0.571s 00:11:58.162 16:50:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:58.162 ************************************ 00:11:58.162 END TEST accel_rpc 00:11:58.162 ************************************ 00:11:58.162 16:50:46 -- common/autotest_common.sh@10 -- # set +x 00:11:58.162 16:50:46 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:58.162 16:50:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:58.162 16:50:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:58.162 16:50:46 -- common/autotest_common.sh@10 -- # set +x 00:11:58.162 ************************************ 00:11:58.162 START TEST app_cmdline 00:11:58.162 ************************************ 00:11:58.162 16:50:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:58.162 * Looking for test storage... 
00:11:58.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:58.162 16:50:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:58.162 16:50:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:58.162 16:50:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:58.421 16:50:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:58.421 16:50:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:58.421 16:50:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:58.421 16:50:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:58.421 16:50:47 -- scripts/common.sh@335 -- # IFS=.-: 00:11:58.421 16:50:47 -- scripts/common.sh@335 -- # read -ra ver1 00:11:58.421 16:50:47 -- scripts/common.sh@336 -- # IFS=.-: 00:11:58.421 16:50:47 -- scripts/common.sh@336 -- # read -ra ver2 00:11:58.421 16:50:47 -- scripts/common.sh@337 -- # local 'op=<' 00:11:58.421 16:50:47 -- scripts/common.sh@339 -- # ver1_l=2 00:11:58.421 16:50:47 -- scripts/common.sh@340 -- # ver2_l=1 00:11:58.421 16:50:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:58.421 16:50:47 -- scripts/common.sh@343 -- # case "$op" in 00:11:58.421 16:50:47 -- scripts/common.sh@344 -- # : 1 00:11:58.421 16:50:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:58.421 16:50:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:58.421 16:50:47 -- scripts/common.sh@364 -- # decimal 1 00:11:58.421 16:50:47 -- scripts/common.sh@352 -- # local d=1 00:11:58.421 16:50:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:58.421 16:50:47 -- scripts/common.sh@354 -- # echo 1 00:11:58.421 16:50:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:58.421 16:50:47 -- scripts/common.sh@365 -- # decimal 2 00:11:58.421 16:50:47 -- scripts/common.sh@352 -- # local d=2 00:11:58.421 16:50:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:58.421 16:50:47 -- scripts/common.sh@354 -- # echo 2 00:11:58.421 16:50:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:58.422 16:50:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:58.422 16:50:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:58.422 16:50:47 -- scripts/common.sh@367 -- # return 0 00:11:58.422 16:50:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:58.422 16:50:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:58.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.422 --rc genhtml_branch_coverage=1 00:11:58.422 --rc genhtml_function_coverage=1 00:11:58.422 --rc genhtml_legend=1 00:11:58.422 --rc geninfo_all_blocks=1 00:11:58.422 --rc geninfo_unexecuted_blocks=1 00:11:58.422 00:11:58.422 ' 00:11:58.422 16:50:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:58.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.422 --rc genhtml_branch_coverage=1 00:11:58.422 --rc genhtml_function_coverage=1 00:11:58.422 --rc genhtml_legend=1 00:11:58.422 --rc geninfo_all_blocks=1 00:11:58.422 --rc geninfo_unexecuted_blocks=1 00:11:58.422 00:11:58.422 ' 00:11:58.422 16:50:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:58.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.422 --rc genhtml_branch_coverage=1 00:11:58.422 --rc genhtml_function_coverage=1 00:11:58.422 --rc genhtml_legend=1 00:11:58.422 --rc geninfo_all_blocks=1 00:11:58.422 --rc geninfo_unexecuted_blocks=1 00:11:58.422 00:11:58.422 ' 00:11:58.422 16:50:47 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:58.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.422 --rc genhtml_branch_coverage=1 00:11:58.422 --rc genhtml_function_coverage=1 00:11:58.422 --rc genhtml_legend=1 00:11:58.422 --rc geninfo_all_blocks=1 00:11:58.422 --rc geninfo_unexecuted_blocks=1 00:11:58.422 00:11:58.422 ' 00:11:58.422 16:50:47 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:58.422 16:50:47 -- app/cmdline.sh@17 -- # spdk_tgt_pid=108317 00:11:58.422 16:50:47 -- app/cmdline.sh@18 -- # waitforlisten 108317 00:11:58.422 16:50:47 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:58.422 16:50:47 -- common/autotest_common.sh@829 -- # '[' -z 108317 ']' 00:11:58.422 16:50:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.422 16:50:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:58.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.422 16:50:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.422 16:50:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:58.422 16:50:47 -- common/autotest_common.sh@10 -- # set +x 00:11:58.422 [2024-11-05 16:50:47.196291] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:58.422 [2024-11-05 16:50:47.196524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108317 ] 00:11:58.680 [2024-11-05 16:50:47.366450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.680 [2024-11-05 16:50:47.528118] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:58.680 [2024-11-05 16:50:47.528371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.060 16:50:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:00.060 16:50:48 -- common/autotest_common.sh@862 -- # return 0 00:12:00.060 16:50:48 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:00.319 { 00:12:00.319 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:12:00.319 "fields": { 00:12:00.319 "major": 24, 00:12:00.319 "minor": 1, 00:12:00.319 "patch": 1, 00:12:00.319 "suffix": "-pre", 00:12:00.319 "commit": "c13c99a5e" 00:12:00.319 } 00:12:00.319 } 00:12:00.319 16:50:49 -- app/cmdline.sh@22 -- # expected_methods=() 00:12:00.319 16:50:49 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:00.319 16:50:49 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:00.319 16:50:49 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:00.319 16:50:49 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:00.319 16:50:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.319 16:50:49 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:00.319 16:50:49 -- common/autotest_common.sh@10 -- # set +x 00:12:00.319 16:50:49 -- app/cmdline.sh@26 -- # sort 00:12:00.319 16:50:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.319 16:50:49 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:00.319 16:50:49 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:00.319 16:50:49 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:00.319 16:50:49 -- common/autotest_common.sh@650 -- # local es=0 00:12:00.319 16:50:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:00.319 16:50:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:00.319 16:50:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.319 16:50:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:00.319 16:50:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.319 16:50:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:00.319 16:50:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.319 16:50:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:00.319 16:50:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:00.319 16:50:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:00.578 request: 00:12:00.578 { 00:12:00.578 "method": "env_dpdk_get_mem_stats", 00:12:00.578 "req_id": 1 00:12:00.578 } 00:12:00.578 Got JSON-RPC error response 00:12:00.578 response: 00:12:00.578 { 00:12:00.578 "code": -32601, 00:12:00.578 "message": "Method not found" 00:12:00.578 } 00:12:00.578 16:50:49 -- common/autotest_common.sh@653 -- # es=1 00:12:00.578 16:50:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:00.578 16:50:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:00.578 16:50:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:00.578 16:50:49 -- app/cmdline.sh@1 -- # killprocess 108317 00:12:00.578 16:50:49 -- common/autotest_common.sh@936 -- # '[' -z 108317 ']' 00:12:00.578 16:50:49 -- common/autotest_common.sh@940 -- # kill -0 108317 00:12:00.578 16:50:49 -- common/autotest_common.sh@941 -- # uname 00:12:00.578 16:50:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:00.578 16:50:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 108317 00:12:00.578 16:50:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:00.578 killing process with pid 108317 00:12:00.578 16:50:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:00.578 16:50:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 108317' 00:12:00.578 16:50:49 -- common/autotest_common.sh@955 -- # kill 108317 00:12:00.578 16:50:49 -- common/autotest_common.sh@960 -- # wait 108317 00:12:02.479 00:12:02.480 real 0m4.278s 00:12:02.480 user 0m4.902s 00:12:02.480 sys 0m0.596s 00:12:02.480 ************************************ 00:12:02.480 END TEST app_cmdline 00:12:02.480 ************************************ 00:12:02.480 16:50:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:02.480 16:50:51 -- common/autotest_common.sh@10 -- # set +x 00:12:02.480 16:50:51 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:02.480 16:50:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:02.480 16:50:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:02.480 16:50:51 -- common/autotest_common.sh@10 -- # set +x 
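The app_cmdline run above exercises the --rpcs-allowed whitelist: spdk_get_version and rpc_get_methods answer normally, while env_dpdk_get_mem_stats, a method that exists on an unrestricted target, is rejected with JSON-RPC error -32601. A sketch reproducing the contrast, assuming the target is up and listening on the default socket:

# Sketch: only whitelisted methods respond; everything else comes back as
# "Method not found" (-32601).
./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
./scripts/rpc.py spdk_get_version        # prints the version JSON
./scripts/rpc.py env_dpdk_get_mem_stats  # fails with code -32601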
00:12:02.480 ************************************ 00:12:02.480 START TEST version 00:12:02.480 ************************************ 00:12:02.480 16:50:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:02.480 * Looking for test storage... 00:12:02.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:02.480 16:50:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:02.480 16:50:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:02.480 16:50:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:02.739 16:50:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:02.739 16:50:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:02.739 16:50:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:02.739 16:50:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:02.739 16:50:51 -- scripts/common.sh@335 -- # IFS=.-: 00:12:02.739 16:50:51 -- scripts/common.sh@335 -- # read -ra ver1 00:12:02.739 16:50:51 -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.739 16:50:51 -- scripts/common.sh@336 -- # read -ra ver2 00:12:02.739 16:50:51 -- scripts/common.sh@337 -- # local 'op=<' 00:12:02.739 16:50:51 -- scripts/common.sh@339 -- # ver1_l=2 00:12:02.739 16:50:51 -- scripts/common.sh@340 -- # ver2_l=1 00:12:02.739 16:50:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:02.739 16:50:51 -- scripts/common.sh@343 -- # case "$op" in 00:12:02.739 16:50:51 -- scripts/common.sh@344 -- # : 1 00:12:02.739 16:50:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:02.739 16:50:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:02.739 16:50:51 -- scripts/common.sh@364 -- # decimal 1 00:12:02.739 16:50:51 -- scripts/common.sh@352 -- # local d=1 00:12:02.739 16:50:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.739 16:50:51 -- scripts/common.sh@354 -- # echo 1 00:12:02.739 16:50:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:02.739 16:50:51 -- scripts/common.sh@365 -- # decimal 2 00:12:02.739 16:50:51 -- scripts/common.sh@352 -- # local d=2 00:12:02.739 16:50:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.739 16:50:51 -- scripts/common.sh@354 -- # echo 2 00:12:02.739 16:50:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:02.739 16:50:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:02.739 16:50:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:02.739 16:50:51 -- scripts/common.sh@367 -- # return 0 00:12:02.739 16:50:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.739 16:50:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:02.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.739 --rc genhtml_branch_coverage=1 00:12:02.739 --rc genhtml_function_coverage=1 00:12:02.739 --rc genhtml_legend=1 00:12:02.739 --rc geninfo_all_blocks=1 00:12:02.739 --rc geninfo_unexecuted_blocks=1 00:12:02.739 00:12:02.739 ' 00:12:02.739 16:50:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:02.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.739 --rc genhtml_branch_coverage=1 00:12:02.739 --rc genhtml_function_coverage=1 00:12:02.739 --rc genhtml_legend=1 00:12:02.739 --rc geninfo_all_blocks=1 00:12:02.739 --rc geninfo_unexecuted_blocks=1 00:12:02.739 00:12:02.739 ' 00:12:02.739 16:50:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:02.739 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:02.739 --rc genhtml_branch_coverage=1 00:12:02.739 --rc genhtml_function_coverage=1 00:12:02.739 --rc genhtml_legend=1 00:12:02.739 --rc geninfo_all_blocks=1 00:12:02.739 --rc geninfo_unexecuted_blocks=1 00:12:02.739 00:12:02.739 ' 00:12:02.739 16:50:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:02.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.739 --rc genhtml_branch_coverage=1 00:12:02.739 --rc genhtml_function_coverage=1 00:12:02.739 --rc genhtml_legend=1 00:12:02.739 --rc geninfo_all_blocks=1 00:12:02.739 --rc geninfo_unexecuted_blocks=1 00:12:02.739 00:12:02.739 ' 00:12:02.739 16:50:51 -- app/version.sh@17 -- # get_header_version major 00:12:02.739 16:50:51 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:02.739 16:50:51 -- app/version.sh@14 -- # cut -f2 00:12:02.739 16:50:51 -- app/version.sh@14 -- # tr -d '"' 00:12:02.739 16:50:51 -- app/version.sh@17 -- # major=24 00:12:02.739 16:50:51 -- app/version.sh@18 -- # get_header_version minor 00:12:02.739 16:50:51 -- app/version.sh@14 -- # cut -f2 00:12:02.739 16:50:51 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:02.739 16:50:51 -- app/version.sh@14 -- # tr -d '"' 00:12:02.739 16:50:51 -- app/version.sh@18 -- # minor=1 00:12:02.739 16:50:51 -- app/version.sh@19 -- # get_header_version patch 00:12:02.739 16:50:51 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:02.739 16:50:51 -- app/version.sh@14 -- # cut -f2 00:12:02.739 16:50:51 -- app/version.sh@14 -- # tr -d '"' 00:12:02.739 16:50:51 -- app/version.sh@19 -- # patch=1 00:12:02.739 16:50:51 -- app/version.sh@20 -- # get_header_version suffix 00:12:02.739 16:50:51 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:02.739 16:50:51 -- app/version.sh@14 -- # cut -f2 00:12:02.739 16:50:51 -- app/version.sh@14 -- # tr -d '"' 00:12:02.739 16:50:51 -- app/version.sh@20 -- # suffix=-pre 00:12:02.739 16:50:51 -- app/version.sh@22 -- # version=24.1 00:12:02.740 16:50:51 -- app/version.sh@25 -- # (( patch != 0 )) 00:12:02.740 16:50:51 -- app/version.sh@25 -- # version=24.1.1 00:12:02.740 16:50:51 -- app/version.sh@28 -- # version=24.1.1rc0 00:12:02.740 16:50:51 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:02.740 16:50:51 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:02.740 16:50:51 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:12:02.740 16:50:51 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:12:02.740 00:12:02.740 real 0m0.224s 00:12:02.740 user 0m0.178s 00:12:02.740 sys 0m0.083s 00:12:02.740 ************************************ 00:12:02.740 END TEST version 00:12:02.740 ************************************ 00:12:02.740 16:50:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:02.740 16:50:51 -- common/autotest_common.sh@10 -- # set +x 00:12:02.740 16:50:51 -- spdk/autotest.sh@181 -- # '[' 1 -eq 1 ']' 00:12:02.740 16:50:51 -- spdk/autotest.sh@182 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 
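version.sh derives each component with the grep/cut/tr pipeline traced above and then checks that the Python package agrees. A condensed sketch of that logic (the values in comments are the ones this build reports; the rc0 mapping for a -pre suffix is inferred from the trace):

    VH=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    get_header_version() {
        # field 2 of the tab-separated '#define SPDK_VERSION_<X> <val>' line
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$VH" | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)    # 24
    minor=$(get_header_version MINOR)    # 1
    patch=$(get_header_version PATCH)    # 1
    suffix=$(get_header_version SUFFIX)  # -pre
    version=$major.$minor                # 24.1
    (( patch != 0 )) && version=$version.$patch   # 24.1.1
    version=${version}rc0                # -pre builds are tagged rc0: 24.1.1rc0
    py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py_version == "$version" ]]      # both sides read 24.1.1rc0 in this run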
00:12:02.740 16:50:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:02.740 16:50:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:02.740 16:50:51 -- common/autotest_common.sh@10 -- # set +x 00:12:02.740 ************************************ 00:12:02.740 START TEST blockdev_general 00:12:02.740 ************************************ 00:12:02.740 16:50:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:02.740 * Looking for test storage... 00:12:02.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:02.740 16:50:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:02.740 16:50:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:02.740 16:50:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:02.999 16:50:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:02.999 16:50:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:02.999 16:50:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:02.999 16:50:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:02.999 16:50:51 -- scripts/common.sh@335 -- # IFS=.-: 00:12:02.999 16:50:51 -- scripts/common.sh@335 -- # read -ra ver1 00:12:02.999 16:50:51 -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.999 16:50:51 -- scripts/common.sh@336 -- # read -ra ver2 00:12:02.999 16:50:51 -- scripts/common.sh@337 -- # local 'op=<' 00:12:02.999 16:50:51 -- scripts/common.sh@339 -- # ver1_l=2 00:12:02.999 16:50:51 -- scripts/common.sh@340 -- # ver2_l=1 00:12:02.999 16:50:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:02.999 16:50:51 -- scripts/common.sh@343 -- # case "$op" in 00:12:02.999 16:50:51 -- scripts/common.sh@344 -- # : 1 00:12:02.999 16:50:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:02.999 16:50:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:02.999 16:50:51 -- scripts/common.sh@364 -- # decimal 1 00:12:02.999 16:50:51 -- scripts/common.sh@352 -- # local d=1 00:12:02.999 16:50:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.999 16:50:51 -- scripts/common.sh@354 -- # echo 1 00:12:02.999 16:50:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:02.999 16:50:51 -- scripts/common.sh@365 -- # decimal 2 00:12:02.999 16:50:51 -- scripts/common.sh@352 -- # local d=2 00:12:02.999 16:50:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.999 16:50:51 -- scripts/common.sh@354 -- # echo 2 00:12:02.999 16:50:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:02.999 16:50:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:02.999 16:50:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:02.999 16:50:51 -- scripts/common.sh@367 -- # return 0 00:12:02.999 16:50:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.999 16:50:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:02.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.999 --rc genhtml_branch_coverage=1 00:12:02.999 --rc genhtml_function_coverage=1 00:12:02.999 --rc genhtml_legend=1 00:12:02.999 --rc geninfo_all_blocks=1 00:12:02.999 --rc geninfo_unexecuted_blocks=1 00:12:02.999 00:12:02.999 ' 00:12:02.999 16:50:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:02.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.999 --rc genhtml_branch_coverage=1 00:12:02.999 --rc genhtml_function_coverage=1 00:12:02.999 --rc genhtml_legend=1 00:12:02.999 --rc geninfo_all_blocks=1 00:12:02.999 --rc geninfo_unexecuted_blocks=1 00:12:02.999 00:12:02.999 ' 00:12:02.999 16:50:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:02.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.999 --rc genhtml_branch_coverage=1 00:12:02.999 --rc genhtml_function_coverage=1 00:12:02.999 --rc genhtml_legend=1 00:12:02.999 --rc geninfo_all_blocks=1 00:12:02.999 --rc geninfo_unexecuted_blocks=1 00:12:02.999 00:12:02.999 ' 00:12:02.999 16:50:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:02.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.999 --rc genhtml_branch_coverage=1 00:12:02.999 --rc genhtml_function_coverage=1 00:12:02.999 --rc genhtml_legend=1 00:12:02.999 --rc geninfo_all_blocks=1 00:12:02.999 --rc geninfo_unexecuted_blocks=1 00:12:02.999 00:12:02.999 ' 00:12:02.999 16:50:51 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:02.999 16:50:51 -- bdev/nbd_common.sh@6 -- # set -e 00:12:02.999 16:50:51 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:02.999 16:50:51 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:02.999 16:50:51 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:02.999 16:50:51 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:02.999 16:50:51 -- bdev/blockdev.sh@18 -- # : 00:12:02.999 16:50:51 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:12:02.999 16:50:51 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:12:02.999 16:50:51 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:12:02.999 16:50:51 -- bdev/blockdev.sh@672 -- # uname -s 00:12:02.999 16:50:51 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:12:02.999 16:50:51 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:12:02.999 16:50:51 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:12:02.999 16:50:51 -- bdev/blockdev.sh@681 -- # crypto_device= 00:12:02.999 16:50:51 -- bdev/blockdev.sh@682 -- # dek= 00:12:02.999 16:50:51 -- bdev/blockdev.sh@683 -- # env_ctx= 00:12:02.999 16:50:51 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:12:02.999 16:50:51 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:12:02.999 16:50:51 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:12:02.999 16:50:51 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:12:02.999 16:50:51 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:12:02.999 16:50:51 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=108523 00:12:02.999 16:50:51 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:12:02.999 16:50:51 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:02.999 16:50:51 -- bdev/blockdev.sh@47 -- # waitforlisten 108523 00:12:02.999 16:50:51 -- common/autotest_common.sh@829 -- # '[' -z 108523 ']' 00:12:02.999 16:50:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.999 16:50:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:02.999 16:50:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.999 16:50:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:02.999 16:50:51 -- common/autotest_common.sh@10 -- # set +x 00:12:02.999 [2024-11-05 16:50:51.797142] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:02.999 [2024-11-05 16:50:51.797354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108523 ] 00:12:03.270 [2024-11-05 16:50:51.962940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.270 [2024-11-05 16:50:52.140669] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:03.270 [2024-11-05 16:50:52.140911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.850 16:50:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:03.850 16:50:52 -- common/autotest_common.sh@862 -- # return 0 00:12:03.850 16:50:52 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:12:03.850 16:50:52 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:12:03.850 16:50:52 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:12:03.850 16:50:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.850 16:50:52 -- common/autotest_common.sh@10 -- # set +x 00:12:04.783 [2024-11-05 16:50:53.410321] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:04.783 [2024-11-05 16:50:53.410443] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:04.783 00:12:04.783 [2024-11-05 16:50:53.418296] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:04.783 [2024-11-05 16:50:53.418390] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:04.783 00:12:04.783 Malloc0 00:12:04.783 Malloc1 00:12:04.783 Malloc2 00:12:04.783 Malloc3 00:12:04.783 Malloc4 00:12:04.783 
Malloc5 00:12:05.041 Malloc6 00:12:05.041 Malloc7 00:12:05.041 Malloc8 00:12:05.041 Malloc9 00:12:05.041 [2024-11-05 16:50:53.780621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:05.041 [2024-11-05 16:50:53.780744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.041 [2024-11-05 16:50:53.780780] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:12:05.041 [2024-11-05 16:50:53.780819] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.041 [2024-11-05 16:50:53.783148] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.041 [2024-11-05 16:50:53.783237] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:05.041 TestPT 00:12:05.041 16:50:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.041 16:50:53 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:12:05.041 5000+0 records in 00:12:05.041 5000+0 records out 00:12:05.041 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0274014 s, 374 MB/s 00:12:05.041 16:50:53 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:12:05.041 16:50:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.041 16:50:53 -- common/autotest_common.sh@10 -- # set +x 00:12:05.041 AIO0 00:12:05.041 16:50:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.041 16:50:53 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:12:05.041 16:50:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.041 16:50:53 -- common/autotest_common.sh@10 -- # set +x 00:12:05.041 16:50:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.041 16:50:53 -- bdev/blockdev.sh@738 -- # cat 00:12:05.041 16:50:53 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:12:05.041 16:50:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.041 16:50:53 -- common/autotest_common.sh@10 -- # set +x 00:12:05.041 16:50:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.041 16:50:53 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:12:05.041 16:50:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.041 16:50:53 -- common/autotest_common.sh@10 -- # set +x 00:12:05.300 16:50:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.300 16:50:53 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:05.300 16:50:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.300 16:50:53 -- common/autotest_common.sh@10 -- # set +x 00:12:05.300 16:50:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.300 16:50:53 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:12:05.300 16:50:53 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:12:05.300 16:50:53 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:12:05.300 16:50:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.300 16:50:53 -- common/autotest_common.sh@10 -- # set +x 00:12:05.300 16:50:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.300 16:50:54 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:12:05.300 16:50:54 -- bdev/blockdev.sh@747 -- # jq -r .name 00:12:05.301 16:50:54 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "4e12e0fb-d81f-4b2f-8504-6bfa62665d5d"' ' ],' ' 
"product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4e12e0fb-d81f-4b2f-8504-6bfa62665d5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "f35f2b77-53a5-51c9-b1c0-77c12c430197"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f35f2b77-53a5-51c9-b1c0-77c12c430197",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "23dbd07a-ad13-5aca-8d1d-0b717ea89880"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "23dbd07a-ad13-5aca-8d1d-0b717ea89880",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "4be6b632-edf3-5c86-80e3-78ab731ddc60"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4be6b632-edf3-5c86-80e3-78ab731ddc60",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "ca101dd3-1e1c-59d9-ae30-0a377e858687"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ca101dd3-1e1c-59d9-ae30-0a377e858687",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' 
' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "45f4ebc0-b496-5fbf-a319-73f544d65faf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "45f4ebc0-b496-5fbf-a319-73f544d65faf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "38316be1-1e51-5b68-ab60-d2c8ee12e3c1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "38316be1-1e51-5b68-ab60-d2c8ee12e3c1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "5899ac10-08d2-5ced-b936-b693b46c9b42"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5899ac10-08d2-5ced-b936-b693b46c9b42",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "039600ae-153f-5357-8010-4c756a4def25"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "039600ae-153f-5357-8010-4c756a4def25",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "c49b79c9-6f58-5321-9158-bd3e71ac16eb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c49b79c9-6f58-5321-9158-bd3e71ac16eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "7c4a1f13-6856-5380-8056-b951f33d017d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7c4a1f13-6856-5380-8056-b951f33d017d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "97aafead-569b-5716-9832-5c8bc06a2bce"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "97aafead-569b-5716-9832-5c8bc06a2bce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "b5747af3-280b-42bc-97eb-37367a575557"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b5747af3-280b-42bc-97eb-37367a575557",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b5747af3-280b-42bc-97eb-37367a575557",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "7ed1fc83-6142-4d98-8929-27be6d796fb4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "ccc30e1b-0385-49eb-8cd3-04d8e441b556",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "32dff25a-8393-4243-beb8-e5e01a7fba76"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "32dff25a-8393-4243-beb8-e5e01a7fba76",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "32dff25a-8393-4243-beb8-e5e01a7fba76",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "47eec94e-b2bc-4792-bc79-fc2b46133391",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "aff1333d-79bf-4cfe-a3b6-fa88efab46ef",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "2648913c-741e-40b7-9168-a285a4bc74d6"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2648913c-741e-40b7-9168-a285a4bc74d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2648913c-741e-40b7-9168-a285a4bc74d6",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "6ba833c9-5d7d-4bcf-8bf0-1e4b30a02dcd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "6aacdb4d-c674-4bcd-9fb2-dd1509ff86d6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "0208e279-a692-45a2-bfa5-0fc944d27fee"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "0208e279-a692-45a2-bfa5-0fc944d27fee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:05.301 16:50:54 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:12:05.301 16:50:54 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:12:05.301 16:50:54 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:12:05.301 16:50:54 -- bdev/blockdev.sh@752 -- # killprocess 108523 00:12:05.301 16:50:54 -- common/autotest_common.sh@936 -- # '[' -z 108523 ']' 00:12:05.301 16:50:54 -- common/autotest_common.sh@940 -- # kill -0 108523 00:12:05.301 16:50:54 -- common/autotest_common.sh@941 -- # uname 00:12:05.301 16:50:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:05.301 16:50:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 108523 00:12:05.301 16:50:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:05.301 16:50:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:05.301 killing process with pid 108523 00:12:05.301 16:50:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 108523' 00:12:05.301 16:50:54 -- common/autotest_common.sh@955 -- # kill 108523 00:12:05.301 16:50:54 -- common/autotest_common.sh@960 -- # wait 108523 00:12:07.864 16:50:56 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:07.864 16:50:56 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:07.864 16:50:56 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:07.864 16:50:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:07.864 16:50:56 -- common/autotest_common.sh@10 -- # set +x 00:12:07.864 ************************************ 00:12:07.864 START TEST bdev_hello_world 00:12:07.864 ************************************ 00:12:07.864 16:50:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:07.864 [2024-11-05 16:50:56.672500] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:07.864 [2024-11-05 16:50:56.672722] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108604 ] 00:12:08.122 [2024-11-05 16:50:56.838817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.122 [2024-11-05 16:50:56.995343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.688 [2024-11-05 16:50:57.320124] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:08.688 [2024-11-05 16:50:57.320256] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:08.688 [2024-11-05 16:50:57.328083] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:08.688 [2024-11-05 16:50:57.328188] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:08.688 [2024-11-05 16:50:57.336099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:08.688 [2024-11-05 16:50:57.336172] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:08.688 [2024-11-05 16:50:57.336217] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:08.688 [2024-11-05 16:50:57.506734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:08.688 [2024-11-05 16:50:57.506896] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.688 [2024-11-05 16:50:57.506963] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:08.688 [2024-11-05 16:50:57.506993] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.688 [2024-11-05 16:50:57.509220] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.688 [2024-11-05 16:50:57.509290] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:08.946 [2024-11-05 16:50:57.804260] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:08.946 [2024-11-05 16:50:57.804363] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:12:08.946 [2024-11-05 16:50:57.804466] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:08.946 [2024-11-05 16:50:57.804540] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:08.946 [2024-11-05 16:50:57.804658] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:08.946 [2024-11-05 16:50:57.804699] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:08.946 [2024-11-05 16:50:57.804787] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:12:08.946 00:12:08.946 [2024-11-05 16:50:57.804841] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:10.859 00:12:10.859 real 0m2.813s 00:12:10.859 user 0m2.288s 00:12:10.859 sys 0m0.369s 00:12:10.859 ************************************ 00:12:10.859 END TEST bdev_hello_world 00:12:10.859 ************************************ 00:12:10.859 16:50:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:10.859 16:50:59 -- common/autotest_common.sh@10 -- # set +x 00:12:10.859 16:50:59 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:10.859 16:50:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:10.859 16:50:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:10.859 16:50:59 -- common/autotest_common.sh@10 -- # set +x 00:12:10.860 ************************************ 00:12:10.860 START TEST bdev_bounds 00:12:10.860 ************************************ 00:12:10.860 16:50:59 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:12:10.860 16:50:59 -- bdev/blockdev.sh@288 -- # bdevio_pid=108661 00:12:10.860 Process bdevio pid: 108661 00:12:10.860 16:50:59 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:10.860 16:50:59 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:10.860 16:50:59 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 108661' 00:12:10.860 16:50:59 -- bdev/blockdev.sh@291 -- # waitforlisten 108661 00:12:10.860 16:50:59 -- common/autotest_common.sh@829 -- # '[' -z 108661 ']' 00:12:10.860 16:50:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.860 16:50:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:10.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.860 16:50:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.860 16:50:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:10.860 16:50:59 -- common/autotest_common.sh@10 -- # set +x 00:12:10.860 [2024-11-05 16:50:59.535165] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:10.860 [2024-11-05 16:50:59.535869] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108661 ] 00:12:10.860 [2024-11-05 16:50:59.717582] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:11.117 [2024-11-05 16:50:59.898640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.117 [2024-11-05 16:50:59.898730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.117 [2024-11-05 16:50:59.898737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.375 [2024-11-05 16:51:00.256570] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:11.375 [2024-11-05 16:51:00.256700] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:11.375 [2024-11-05 16:51:00.264533] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:11.375 [2024-11-05 16:51:00.264643] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:11.633 [2024-11-05 16:51:00.272567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:11.633 [2024-11-05 16:51:00.272642] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:11.633 [2024-11-05 16:51:00.272682] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:11.633 [2024-11-05 16:51:00.474617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:11.633 [2024-11-05 16:51:00.474779] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.633 [2024-11-05 16:51:00.474834] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:11.633 [2024-11-05 16:51:00.474857] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.633 [2024-11-05 16:51:00.477378] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.633 [2024-11-05 16:51:00.477445] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:12.568 16:51:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:12.568 16:51:01 -- common/autotest_common.sh@862 -- # return 0 00:12:12.568 16:51:01 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:12.568 I/O targets: 00:12:12.568 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:12:12.568 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:12:12.568 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:12:12.568 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:12:12.568 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:12:12.568 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:12:12.568 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:12:12.568 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:12:12.568 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:12:12.568 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:12:12.568 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:12:12.568 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:12:12.568 raid0: 131072 blocks of 512 bytes (64 MiB) 00:12:12.568 concat0: 131072 blocks of 512 bytes (64 MiB) 00:12:12.568 raid1: 65536 blocks of 512 bytes (32 MiB) 00:12:12.568 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
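Every size in the I/O targets list below is just num_blocks x block_size; the MiB figures can be reproduced directly (AIO0's 9.8 MiB is rounded up to 10 MiB in the banner):

    # Malloc0/TestPT/raid1: 65536 * 512 B = 32 MiB; raid0/concat0: 131072 * 512 B = 64 MiB
    $ awk 'BEGIN { printf "%.1f MiB\n", 65536 * 512 / (1024 * 1024) }'
    32.0 MiB
    $ awk 'BEGIN { printf "%.1f MiB\n", 5000 * 2048 / (1024 * 1024) }'   # AIO0
    9.8 MiB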
00:12:12.568 00:12:12.568 00:12:12.568 CUnit - A unit testing framework for C - Version 2.1-3 00:12:12.568 http://cunit.sourceforge.net/ 00:12:12.568 00:12:12.568 00:12:12.568 Suite: bdevio tests on: AIO0 00:12:12.568 Test: blockdev write read block ...passed 00:12:12.568 Test: blockdev write zeroes read block ...passed 00:12:12.568 Test: blockdev write zeroes read no split ...passed 00:12:12.568 Test: blockdev write zeroes read split ...passed 00:12:12.568 Test: blockdev write zeroes read split partial ...passed 00:12:12.568 Test: blockdev reset ...passed 00:12:12.568 Test: blockdev write read 8 blocks ...passed 00:12:12.568 Test: blockdev write read size > 128k ...passed 00:12:12.568 Test: blockdev write read invalid size ...passed 00:12:12.568 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.568 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.568 Test: blockdev write read max offset ...passed 00:12:12.568 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.568 Test: blockdev writev readv 8 blocks ...passed 00:12:12.568 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.569 Test: blockdev writev readv block ...passed 00:12:12.569 Test: blockdev writev readv size > 128k ...passed 00:12:12.569 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.569 Test: blockdev comparev and writev ...passed 00:12:12.569 Test: blockdev nvme passthru rw ...passed 00:12:12.569 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.569 Test: blockdev nvme admin passthru ...passed 00:12:12.569 Test: blockdev copy ...passed 00:12:12.569 Suite: bdevio tests on: raid1 00:12:12.569 Test: blockdev write read block ...passed 00:12:12.569 Test: blockdev write zeroes read block ...passed 00:12:12.569 Test: blockdev write zeroes read no split ...passed 00:12:12.569 Test: blockdev write zeroes read split ...passed 00:12:12.569 Test: blockdev write zeroes read split partial ...passed 00:12:12.569 Test: blockdev reset ...passed 00:12:12.569 Test: blockdev write read 8 blocks ...passed 00:12:12.569 Test: blockdev write read size > 128k ...passed 00:12:12.569 Test: blockdev write read invalid size ...passed 00:12:12.569 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.569 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.569 Test: blockdev write read max offset ...passed 00:12:12.569 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.569 Test: blockdev writev readv 8 blocks ...passed 00:12:12.569 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.569 Test: blockdev writev readv block ...passed 00:12:12.569 Test: blockdev writev readv size > 128k ...passed 00:12:12.569 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.569 Test: blockdev comparev and writev ...passed 00:12:12.569 Test: blockdev nvme passthru rw ...passed 00:12:12.569 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.569 Test: blockdev nvme admin passthru ...passed 00:12:12.569 Test: blockdev copy ...passed 00:12:12.569 Suite: bdevio tests on: concat0 00:12:12.569 Test: blockdev write read block ...passed 00:12:12.569 Test: blockdev write zeroes read block ...passed 00:12:12.569 Test: blockdev write zeroes read no split ...passed 00:12:12.569 Test: blockdev write zeroes read split ...passed 00:12:12.569 Test: blockdev write zeroes read split partial ...passed 00:12:12.569 Test: blockdev reset 
...passed 00:12:12.569 Test: blockdev write read 8 blocks ...passed 00:12:12.569 Test: blockdev write read size > 128k ...passed 00:12:12.569 Test: blockdev write read invalid size ...passed 00:12:12.569 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.569 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.569 Test: blockdev write read max offset ...passed 00:12:12.569 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.569 Test: blockdev writev readv 8 blocks ...passed 00:12:12.569 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.569 Test: blockdev writev readv block ...passed 00:12:12.569 Test: blockdev writev readv size > 128k ...passed 00:12:12.569 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.569 Test: blockdev comparev and writev ...passed 00:12:12.569 Test: blockdev nvme passthru rw ...passed 00:12:12.569 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.569 Test: blockdev nvme admin passthru ...passed 00:12:12.569 Test: blockdev copy ...passed 00:12:12.569 Suite: bdevio tests on: raid0 00:12:12.569 Test: blockdev write read block ...passed 00:12:12.569 Test: blockdev write zeroes read block ...passed 00:12:12.569 Test: blockdev write zeroes read no split ...passed 00:12:12.829 Test: blockdev write zeroes read split ...passed 00:12:12.829 Test: blockdev write zeroes read split partial ...passed 00:12:12.829 Test: blockdev reset ...passed 00:12:12.829 Test: blockdev write read 8 blocks ...passed 00:12:12.829 Test: blockdev write read size > 128k ...passed 00:12:12.829 Test: blockdev write read invalid size ...passed 00:12:12.829 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.829 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.829 Test: blockdev write read max offset ...passed 00:12:12.829 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.829 Test: blockdev writev readv 8 blocks ...passed 00:12:12.829 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.829 Test: blockdev writev readv block ...passed 00:12:12.829 Test: blockdev writev readv size > 128k ...passed 00:12:12.829 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.829 Test: blockdev comparev and writev ...passed 00:12:12.829 Test: blockdev nvme passthru rw ...passed 00:12:12.829 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.829 Test: blockdev nvme admin passthru ...passed 00:12:12.829 Test: blockdev copy ...passed 00:12:12.829 Suite: bdevio tests on: TestPT 00:12:12.829 Test: blockdev write read block ...passed 00:12:12.829 Test: blockdev write zeroes read block ...passed 00:12:12.829 Test: blockdev write zeroes read no split ...passed 00:12:12.829 Test: blockdev write zeroes read split ...passed 00:12:12.829 Test: blockdev write zeroes read split partial ...passed 00:12:12.829 Test: blockdev reset ...passed 00:12:12.829 Test: blockdev write read 8 blocks ...passed 00:12:12.829 Test: blockdev write read size > 128k ...passed 00:12:12.829 Test: blockdev write read invalid size ...passed 00:12:12.829 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.829 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.829 Test: blockdev write read max offset ...passed 00:12:12.829 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.829 Test: blockdev writev readv 8 blocks 
...passed 00:12:12.829 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.829 Test: blockdev writev readv block ...passed 00:12:12.829 Test: blockdev writev readv size > 128k ...passed 00:12:12.829 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.829 Test: blockdev comparev and writev ...passed 00:12:12.829 Test: blockdev nvme passthru rw ...passed 00:12:12.829 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.829 Test: blockdev nvme admin passthru ...passed 00:12:12.829 Test: blockdev copy ...passed 00:12:12.829 Suite: bdevio tests on: Malloc2p7 00:12:12.829 Test: blockdev write read block ...passed 00:12:12.829 Test: blockdev write zeroes read block ...passed 00:12:12.829 Test: blockdev write zeroes read no split ...passed 00:12:12.829 Test: blockdev write zeroes read split ...passed 00:12:12.829 Test: blockdev write zeroes read split partial ...passed 00:12:12.829 Test: blockdev reset ...passed 00:12:12.829 Test: blockdev write read 8 blocks ...passed 00:12:12.829 Test: blockdev write read size > 128k ...passed 00:12:12.829 Test: blockdev write read invalid size ...passed 00:12:12.829 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.829 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.829 Test: blockdev write read max offset ...passed 00:12:12.829 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.829 Test: blockdev writev readv 8 blocks ...passed 00:12:12.829 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.829 Test: blockdev writev readv block ...passed 00:12:12.829 Test: blockdev writev readv size > 128k ...passed 00:12:12.829 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.829 Test: blockdev comparev and writev ...passed 00:12:12.829 Test: blockdev nvme passthru rw ...passed 00:12:12.829 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.829 Test: blockdev nvme admin passthru ...passed 00:12:12.829 Test: blockdev copy ...passed 00:12:12.829 Suite: bdevio tests on: Malloc2p6 00:12:12.829 Test: blockdev write read block ...passed 00:12:12.829 Test: blockdev write zeroes read block ...passed 00:12:12.829 Test: blockdev write zeroes read no split ...passed 00:12:12.829 Test: blockdev write zeroes read split ...passed 00:12:12.829 Test: blockdev write zeroes read split partial ...passed 00:12:12.829 Test: blockdev reset ...passed 00:12:12.829 Test: blockdev write read 8 blocks ...passed 00:12:12.829 Test: blockdev write read size > 128k ...passed 00:12:12.829 Test: blockdev write read invalid size ...passed 00:12:12.829 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.829 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.829 Test: blockdev write read max offset ...passed 00:12:12.829 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.829 Test: blockdev writev readv 8 blocks ...passed 00:12:12.829 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.829 Test: blockdev writev readv block ...passed 00:12:12.829 Test: blockdev writev readv size > 128k ...passed 00:12:12.829 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.829 Test: blockdev comparev and writev ...passed 00:12:12.829 Test: blockdev nvme passthru rw ...passed 00:12:12.829 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.829 Test: blockdev nvme admin passthru ...passed 00:12:12.829 Test: blockdev copy ...passed 
00:12:12.829 Suite: bdevio tests on: Malloc2p5 00:12:12.829 Test: blockdev write read block ...passed 00:12:12.829 Test: blockdev write zeroes read block ...passed 00:12:12.829 Test: blockdev write zeroes read no split ...passed 00:12:12.829 Test: blockdev write zeroes read split ...passed 00:12:13.088 Test: blockdev write zeroes read split partial ...passed 00:12:13.088 Test: blockdev reset ...passed 00:12:13.088 Test: blockdev write read 8 blocks ...passed 00:12:13.088 Test: blockdev write read size > 128k ...passed 00:12:13.088 Test: blockdev write read invalid size ...passed 00:12:13.088 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:13.088 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:13.088 Test: blockdev write read max offset ...passed 00:12:13.088 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:13.088 Test: blockdev writev readv 8 blocks ...passed 00:12:13.088 Test: blockdev writev readv 30 x 1block ...passed 00:12:13.088 Test: blockdev writev readv block ...passed 00:12:13.088 Test: blockdev writev readv size > 128k ...passed 00:12:13.088 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:13.088 Test: blockdev comparev and writev ...passed 00:12:13.088 Test: blockdev nvme passthru rw ...passed 00:12:13.088 Test: blockdev nvme passthru vendor specific ...passed 00:12:13.088 Test: blockdev nvme admin passthru ...passed 00:12:13.088 Test: blockdev copy ...passed 00:12:13.088 Suite: bdevio tests on: Malloc2p4 00:12:13.088 Test: blockdev write read block ...passed 00:12:13.088 Test: blockdev write zeroes read block ...passed 00:12:13.088 Test: blockdev write zeroes read no split ...passed 00:12:13.088 Test: blockdev write zeroes read split ...passed 00:12:13.088 Test: blockdev write zeroes read split partial ...passed 00:12:13.088 Test: blockdev reset ...passed 00:12:13.088 Test: blockdev write read 8 blocks ...passed 00:12:13.088 Test: blockdev write read size > 128k ...passed 00:12:13.088 Test: blockdev write read invalid size ...passed 00:12:13.088 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:13.088 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:13.088 Test: blockdev write read max offset ...passed 00:12:13.088 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:13.088 Test: blockdev writev readv 8 blocks ...passed 00:12:13.088 Test: blockdev writev readv 30 x 1block ...passed 00:12:13.088 Test: blockdev writev readv block ...passed 00:12:13.088 Test: blockdev writev readv size > 128k ...passed 00:12:13.088 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:13.088 Test: blockdev comparev and writev ...passed 00:12:13.088 Test: blockdev nvme passthru rw ...passed 00:12:13.088 Test: blockdev nvme passthru vendor specific ...passed 00:12:13.088 Test: blockdev nvme admin passthru ...passed 00:12:13.088 Test: blockdev copy ...passed 00:12:13.088 Suite: bdevio tests on: Malloc2p3 00:12:13.088 Test: blockdev write read block ...passed 00:12:13.088 Test: blockdev write zeroes read block ...passed 00:12:13.088 Test: blockdev write zeroes read no split ...passed 00:12:13.088 Test: blockdev write zeroes read split ...passed 00:12:13.088 Test: blockdev write zeroes read split partial ...passed 00:12:13.088 Test: blockdev reset ...passed 00:12:13.088 Test: blockdev write read 8 blocks ...passed 00:12:13.088 Test: blockdev write read size > 128k ...passed 00:12:13.088 Test: 
blockdev write read invalid size ...passed 00:12:13.088 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:13.088 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:13.088 Test: blockdev write read max offset ...passed 00:12:13.088 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:13.088 Test: blockdev writev readv 8 blocks ...passed 00:12:13.088 Test: blockdev writev readv 30 x 1block ...passed 00:12:13.088 Test: blockdev writev readv block ...passed 00:12:13.088 Test: blockdev writev readv size > 128k ...passed 00:12:13.088 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:13.088 Test: blockdev comparev and writev ...passed 00:12:13.088 Test: blockdev nvme passthru rw ...passed 00:12:13.088 Test: blockdev nvme passthru vendor specific ...passed 00:12:13.088 Test: blockdev nvme admin passthru ...passed 00:12:13.088 Test: blockdev copy ...passed 00:12:13.088 Suite: bdevio tests on: Malloc2p2 00:12:13.088 Test: blockdev write read block ...passed 00:12:13.088 Test: blockdev write zeroes read block ...passed 00:12:13.088 Test: blockdev write zeroes read no split ...passed 00:12:13.088 Test: blockdev write zeroes read split ...passed 00:12:13.088 Test: blockdev write zeroes read split partial ...passed 00:12:13.088 Test: blockdev reset ...passed 00:12:13.088 Test: blockdev write read 8 blocks ...passed 00:12:13.088 Test: blockdev write read size > 128k ...passed 00:12:13.088 Test: blockdev write read invalid size ...passed 00:12:13.088 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:13.088 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:13.088 Test: blockdev write read max offset ...passed 00:12:13.088 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:13.088 Test: blockdev writev readv 8 blocks ...passed 00:12:13.088 Test: blockdev writev readv 30 x 1block ...passed 00:12:13.088 Test: blockdev writev readv block ...passed 00:12:13.088 Test: blockdev writev readv size > 128k ...passed 00:12:13.088 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:13.088 Test: blockdev comparev and writev ...passed 00:12:13.088 Test: blockdev nvme passthru rw ...passed 00:12:13.088 Test: blockdev nvme passthru vendor specific ...passed 00:12:13.088 Test: blockdev nvme admin passthru ...passed 00:12:13.088 Test: blockdev copy ...passed 00:12:13.088 Suite: bdevio tests on: Malloc2p1 00:12:13.088 Test: blockdev write read block ...passed 00:12:13.088 Test: blockdev write zeroes read block ...passed 00:12:13.088 Test: blockdev write zeroes read no split ...passed 00:12:13.088 Test: blockdev write zeroes read split ...passed 00:12:13.088 Test: blockdev write zeroes read split partial ...passed 00:12:13.088 Test: blockdev reset ...passed 00:12:13.088 Test: blockdev write read 8 blocks ...passed 00:12:13.088 Test: blockdev write read size > 128k ...passed 00:12:13.088 Test: blockdev write read invalid size ...passed 00:12:13.088 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:13.088 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:13.088 Test: blockdev write read max offset ...passed 00:12:13.088 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:13.088 Test: blockdev writev readv 8 blocks ...passed 00:12:13.088 Test: blockdev writev readv 30 x 1block ...passed 00:12:13.088 Test: blockdev writev readv block ...passed 
00:12:13.088 Test: blockdev writev readv size > 128k ...passed 00:12:13.088 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:13.088 Test: blockdev comparev and writev ...passed 00:12:13.088 Test: blockdev nvme passthru rw ...passed 00:12:13.088 Test: blockdev nvme passthru vendor specific ...passed 00:12:13.088 Test: blockdev nvme admin passthru ...passed 00:12:13.088 Test: blockdev copy ...passed 00:12:13.088 Suite: bdevio tests on: Malloc2p0 00:12:13.088 Test: blockdev write read block ...passed 00:12:13.089 Test: blockdev write zeroes read block ...passed 00:12:13.089 Test: blockdev write zeroes read no split ...passed 00:12:13.089 Test: blockdev write zeroes read split ...passed 00:12:13.089 Test: blockdev write zeroes read split partial ...passed 00:12:13.089 Test: blockdev reset ...passed 00:12:13.089 Test: blockdev write read 8 blocks ...passed 00:12:13.089 Test: blockdev write read size > 128k ...passed 00:12:13.089 Test: blockdev write read invalid size ...passed 00:12:13.089 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:13.089 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:13.089 Test: blockdev write read max offset ...passed 00:12:13.089 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:13.089 Test: blockdev writev readv 8 blocks ...passed 00:12:13.089 Test: blockdev writev readv 30 x 1block ...passed 00:12:13.089 Test: blockdev writev readv block ...passed 00:12:13.089 Test: blockdev writev readv size > 128k ...passed 00:12:13.089 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:13.089 Test: blockdev comparev and writev ...passed 00:12:13.089 Test: blockdev nvme passthru rw ...passed 00:12:13.089 Test: blockdev nvme passthru vendor specific ...passed 00:12:13.089 Test: blockdev nvme admin passthru ...passed 00:12:13.089 Test: blockdev copy ...passed 00:12:13.089 Suite: bdevio tests on: Malloc1p1 00:12:13.089 Test: blockdev write read block ...passed 00:12:13.089 Test: blockdev write zeroes read block ...passed 00:12:13.089 Test: blockdev write zeroes read no split ...passed 00:12:13.347 Test: blockdev write zeroes read split ...passed 00:12:13.347 Test: blockdev write zeroes read split partial ...passed 00:12:13.347 Test: blockdev reset ...passed 00:12:13.347 Test: blockdev write read 8 blocks ...passed 00:12:13.347 Test: blockdev write read size > 128k ...passed 00:12:13.347 Test: blockdev write read invalid size ...passed 00:12:13.347 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:13.347 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:13.347 Test: blockdev write read max offset ...passed 00:12:13.347 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:13.347 Test: blockdev writev readv 8 blocks ...passed 00:12:13.347 Test: blockdev writev readv 30 x 1block ...passed 00:12:13.347 Test: blockdev writev readv block ...passed 00:12:13.347 Test: blockdev writev readv size > 128k ...passed 00:12:13.347 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:13.347 Test: blockdev comparev and writev ...passed 00:12:13.347 Test: blockdev nvme passthru rw ...passed 00:12:13.347 Test: blockdev nvme passthru vendor specific ...passed 00:12:13.347 Test: blockdev nvme admin passthru ...passed 00:12:13.347 Test: blockdev copy ...passed 00:12:13.347 Suite: bdevio tests on: Malloc1p0 00:12:13.347 Test: blockdev write read block ...passed 00:12:13.347 Test: blockdev 
write zeroes read block ...passed
00:12:13.347 Test: blockdev write zeroes read no split ...passed
00:12:13.347 Test: blockdev write zeroes read split ...passed
00:12:13.347 Test: blockdev write zeroes read split partial ...passed
00:12:13.347 Test: blockdev reset ...passed
00:12:13.347 Test: blockdev write read 8 blocks ...passed
00:12:13.347 Test: blockdev write read size > 128k ...passed
00:12:13.347 Test: blockdev write read invalid size ...passed
00:12:13.347 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:13.347 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:13.347 Test: blockdev write read max offset ...passed
00:12:13.347 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:13.347 Test: blockdev writev readv 8 blocks ...passed
00:12:13.347 Test: blockdev writev readv 30 x 1block ...passed
00:12:13.347 Test: blockdev writev readv block ...passed
00:12:13.347 Test: blockdev writev readv size > 128k ...passed
00:12:13.347 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:13.347 Test: blockdev comparev and writev ...passed
00:12:13.347 Test: blockdev nvme passthru rw ...passed
00:12:13.347 Test: blockdev nvme passthru vendor specific ...passed
00:12:13.347 Test: blockdev nvme admin passthru ...passed
00:12:13.347 Test: blockdev copy ...passed
00:12:13.347 Suite: bdevio tests on: Malloc0
00:12:13.347 Test: blockdev write read block ...passed
00:12:13.347 Test: blockdev write zeroes read block ...passed
00:12:13.347 Test: blockdev write zeroes read no split ...passed
00:12:13.347 Test: blockdev write zeroes read split ...passed
00:12:13.347 Test: blockdev write zeroes read split partial ...passed
00:12:13.347 Test: blockdev reset ...passed
00:12:13.347 Test: blockdev write read 8 blocks ...passed
00:12:13.347 Test: blockdev write read size > 128k ...passed
00:12:13.347 Test: blockdev write read invalid size ...passed
00:12:13.348 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:13.348 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:13.348 Test: blockdev write read max offset ...passed
00:12:13.348 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:13.348 Test: blockdev writev readv 8 blocks ...passed
00:12:13.348 Test: blockdev writev readv 30 x 1block ...passed
00:12:13.348 Test: blockdev writev readv block ...passed
00:12:13.348 Test: blockdev writev readv size > 128k ...passed
00:12:13.348 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:13.348 Test: blockdev comparev and writev ...passed
00:12:13.348 Test: blockdev nvme passthru rw ...passed
00:12:13.348 Test: blockdev nvme passthru vendor specific ...passed
00:12:13.348 Test: blockdev nvme admin passthru ...passed
00:12:13.348 Test: blockdev copy ...passed
00:12:13.348 
00:12:13.348 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:12:13.348               suites     16     16    n/a      0        0
00:12:13.348                tests    368    368    368      0        0
00:12:13.348              asserts   2224   2224   2224      0      n/a
00:12:13.348 
00:12:13.348 Elapsed time = 2.341 seconds
00:12:13.348 0
00:12:13.348 16:51:02 -- bdev/blockdev.sh@293 -- # killprocess 108661
00:12:13.348 16:51:02 -- common/autotest_common.sh@936 -- # '[' -z 108661 ']'
00:12:13.348 16:51:02 -- common/autotest_common.sh@940 -- # kill -0 108661
00:12:13.348 16:51:02 -- common/autotest_common.sh@941 -- # uname
00:12:13.348 16:51:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:12:13.348 16:51:02 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 108661 00:12:13.348 16:51:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:13.348 16:51:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:13.348 killing process with pid 108661 00:12:13.348 16:51:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 108661' 00:12:13.348 16:51:02 -- common/autotest_common.sh@955 -- # kill 108661 00:12:13.348 16:51:02 -- common/autotest_common.sh@960 -- # wait 108661 00:12:15.248 16:51:03 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:12:15.248 00:12:15.248 real 0m4.323s 00:12:15.248 user 0m11.209s 00:12:15.248 sys 0m0.567s 00:12:15.248 ************************************ 00:12:15.248 END TEST bdev_bounds 00:12:15.248 ************************************ 00:12:15.248 16:51:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:15.248 16:51:03 -- common/autotest_common.sh@10 -- # set +x 00:12:15.248 16:51:03 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:15.248 16:51:03 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:15.248 16:51:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:15.248 16:51:03 -- common/autotest_common.sh@10 -- # set +x 00:12:15.248 ************************************ 00:12:15.248 START TEST bdev_nbd 00:12:15.248 ************************************ 00:12:15.248 16:51:03 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:15.248 16:51:03 -- bdev/blockdev.sh@298 -- # uname -s 00:12:15.248 16:51:03 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:12:15.248 16:51:03 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:15.248 16:51:03 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:15.248 16:51:03 -- bdev/blockdev.sh@302 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:15.248 16:51:03 -- bdev/blockdev.sh@302 -- # local bdev_all 00:12:15.248 16:51:03 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:12:15.248 16:51:03 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:12:15.248 16:51:03 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:15.248 16:51:03 -- bdev/blockdev.sh@309 -- # local nbd_all 00:12:15.248 16:51:03 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:12:15.248 16:51:03 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:15.248 16:51:03 -- bdev/blockdev.sh@312 -- # local nbd_list 00:12:15.248 16:51:03 -- bdev/blockdev.sh@313 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 
'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:15.248 16:51:03 -- bdev/blockdev.sh@313 -- # local bdev_list 00:12:15.248 16:51:03 -- bdev/blockdev.sh@316 -- # nbd_pid=108752 00:12:15.248 16:51:03 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:15.248 16:51:03 -- bdev/blockdev.sh@318 -- # waitforlisten 108752 /var/tmp/spdk-nbd.sock 00:12:15.248 16:51:03 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:15.248 16:51:03 -- common/autotest_common.sh@829 -- # '[' -z 108752 ']' 00:12:15.248 16:51:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:15.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:15.249 16:51:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:15.249 16:51:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:15.249 16:51:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:15.249 16:51:03 -- common/autotest_common.sh@10 -- # set +x 00:12:15.249 [2024-11-05 16:51:03.915946] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:15.249 [2024-11-05 16:51:03.916137] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.249 [2024-11-05 16:51:04.083642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.508 [2024-11-05 16:51:04.264640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.766 [2024-11-05 16:51:04.593350] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:15.766 [2024-11-05 16:51:04.593487] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:15.766 [2024-11-05 16:51:04.601291] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:15.766 [2024-11-05 16:51:04.601383] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:15.766 [2024-11-05 16:51:04.609722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:15.766 [2024-11-05 16:51:04.609799] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:15.766 [2024-11-05 16:51:04.609838] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:16.024 [2024-11-05 16:51:04.790190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:16.024 [2024-11-05 16:51:04.790347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.024 [2024-11-05 16:51:04.790397] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:16.024 [2024-11-05 16:51:04.790427] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.024 [2024-11-05 16:51:04.792817] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.024 [2024-11-05 16:51:04.792892] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:16.959 16:51:05 -- common/autotest_common.sh@858 -- # (( i == 0 
)) 00:12:16.959 16:51:05 -- common/autotest_common.sh@862 -- # return 0 00:12:16.959 16:51:05 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:16.959 16:51:05 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:16.959 16:51:05 -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:16.959 16:51:05 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:16.959 16:51:05 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:16.959 16:51:05 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:16.959 16:51:05 -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:16.959 16:51:05 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:16.959 16:51:05 -- bdev/nbd_common.sh@24 -- # local i 00:12:16.959 16:51:05 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:16.959 16:51:05 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:16.959 16:51:05 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:16.959 16:51:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:12:16.959 16:51:05 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:16.959 16:51:05 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:16.959 16:51:05 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:16.959 16:51:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:16.959 16:51:05 -- common/autotest_common.sh@867 -- # local i 00:12:16.959 16:51:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:16.959 16:51:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:16.959 16:51:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:16.959 16:51:05 -- common/autotest_common.sh@871 -- # break 00:12:16.959 16:51:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:16.959 16:51:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:16.959 16:51:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:16.959 1+0 records in 00:12:16.959 1+0 records out 00:12:16.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037161 s, 11.0 MB/s 00:12:16.959 16:51:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.959 16:51:05 -- common/autotest_common.sh@884 -- # size=4096 00:12:16.959 16:51:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.959 16:51:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:16.959 16:51:05 -- common/autotest_common.sh@887 -- # return 0 00:12:16.959 16:51:05 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:16.959 16:51:05 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:16.959 16:51:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:12:17.224 16:51:05 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:17.224 16:51:05 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:17.224 16:51:05 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:17.224 16:51:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:17.224 16:51:05 -- common/autotest_common.sh@867 -- # local i 00:12:17.224 16:51:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:17.224 16:51:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:17.224 16:51:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:17.224 16:51:05 -- common/autotest_common.sh@871 -- # break 00:12:17.224 16:51:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:17.224 16:51:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:17.224 16:51:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:17.224 1+0 records in 00:12:17.224 1+0 records out 00:12:17.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282916 s, 14.5 MB/s 00:12:17.224 16:51:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.224 16:51:05 -- common/autotest_common.sh@884 -- # size=4096 00:12:17.224 16:51:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.224 16:51:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:17.224 16:51:06 -- common/autotest_common.sh@887 -- # return 0 00:12:17.224 16:51:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:17.224 16:51:06 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:17.224 16:51:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:12:17.494 16:51:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:17.494 16:51:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:17.494 16:51:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:17.494 16:51:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:17.494 16:51:06 -- common/autotest_common.sh@867 -- # local i 00:12:17.494 16:51:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:17.494 16:51:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:17.494 16:51:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:17.494 16:51:06 -- common/autotest_common.sh@871 -- # break 00:12:17.494 16:51:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:17.494 16:51:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:17.494 16:51:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:17.494 1+0 records in 00:12:17.494 1+0 records out 00:12:17.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379885 s, 10.8 MB/s 00:12:17.494 16:51:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.494 16:51:06 -- common/autotest_common.sh@884 -- # size=4096 00:12:17.494 16:51:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.494 16:51:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:17.494 16:51:06 -- common/autotest_common.sh@887 -- # return 0 00:12:17.494 16:51:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:17.494 16:51:06 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 
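
The trace above repeats the same four steps for each of the 16 bdevs: an nbd_start_disk RPC against /var/tmp/spdk-nbd.sock, a bounded poll of /proc/partitions (up to 20 tries), one 4096-byte O_DIRECT dd read, and a non-zero size check on the copied block. A condensed sketch of one iteration, reconstructed from this trace; the helper name start_and_check and the sleep interval are illustrative assumptions, not names from the harness:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    nbdtest=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest

    start_and_check() {
        local bdev=$1 dev=$2 name=${2#/dev/} i size
        # Attach the bdev to the NBD device over the app's RPC socket.
        "$rpc" -s "$sock" nbd_start_disk "$bdev" "$dev"
        # Wait (up to 20 tries) for the kernel to publish the device node.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions && break
            sleep 0.1    # assumption: the retry interval is not visible in the trace
        done
        # Read one 4096-byte block with O_DIRECT, then check the copy size.
        dd if="$dev" of="$nbdtest" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$nbdtest")
        rm -f "$nbdtest"
        [ "$size" != 0 ]    # the trace evaluates '[' 4096 '!=' 0 ']'
    }

    start_and_check Malloc0 /dev/nbd0
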
00:12:17.494 16:51:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:12:17.752 16:51:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:17.752 16:51:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:17.752 16:51:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:17.752 16:51:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:17.752 16:51:06 -- common/autotest_common.sh@867 -- # local i 00:12:17.752 16:51:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:17.752 16:51:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:17.752 16:51:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:17.752 16:51:06 -- common/autotest_common.sh@871 -- # break 00:12:17.752 16:51:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:17.752 16:51:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:17.752 16:51:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:17.752 1+0 records in 00:12:17.752 1+0 records out 00:12:17.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281415 s, 14.6 MB/s 00:12:17.752 16:51:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.752 16:51:06 -- common/autotest_common.sh@884 -- # size=4096 00:12:17.752 16:51:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.752 16:51:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:17.752 16:51:06 -- common/autotest_common.sh@887 -- # return 0 00:12:17.752 16:51:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:17.752 16:51:06 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:17.752 16:51:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:12:18.010 16:51:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:18.010 16:51:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:18.010 16:51:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:18.010 16:51:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:18.010 16:51:06 -- common/autotest_common.sh@867 -- # local i 00:12:18.010 16:51:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:18.010 16:51:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:18.010 16:51:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:18.010 16:51:06 -- common/autotest_common.sh@871 -- # break 00:12:18.010 16:51:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:18.010 16:51:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:18.010 16:51:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.010 1+0 records in 00:12:18.010 1+0 records out 00:12:18.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422472 s, 9.7 MB/s 00:12:18.010 16:51:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.010 16:51:06 -- common/autotest_common.sh@884 -- # size=4096 00:12:18.010 16:51:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.010 16:51:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:18.010 16:51:06 -- common/autotest_common.sh@887 -- # return 0 00:12:18.010 16:51:06 -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:18.010 16:51:06 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:18.010 16:51:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:12:18.268 16:51:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:18.268 16:51:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:18.268 16:51:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:18.268 16:51:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:18.268 16:51:06 -- common/autotest_common.sh@867 -- # local i 00:12:18.268 16:51:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:18.268 16:51:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:18.268 16:51:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:18.268 16:51:06 -- common/autotest_common.sh@871 -- # break 00:12:18.268 16:51:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:18.268 16:51:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:18.268 16:51:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.268 1+0 records in 00:12:18.268 1+0 records out 00:12:18.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483882 s, 8.5 MB/s 00:12:18.268 16:51:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.268 16:51:06 -- common/autotest_common.sh@884 -- # size=4096 00:12:18.268 16:51:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.268 16:51:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:18.268 16:51:06 -- common/autotest_common.sh@887 -- # return 0 00:12:18.268 16:51:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:18.268 16:51:06 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:18.268 16:51:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:12:18.526 16:51:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:12:18.526 16:51:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:12:18.526 16:51:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:12:18.526 16:51:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:12:18.526 16:51:07 -- common/autotest_common.sh@867 -- # local i 00:12:18.526 16:51:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:18.526 16:51:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:18.526 16:51:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:12:18.526 16:51:07 -- common/autotest_common.sh@871 -- # break 00:12:18.526 16:51:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:18.526 16:51:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:18.526 16:51:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.526 1+0 records in 00:12:18.526 1+0 records out 00:12:18.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000666615 s, 6.1 MB/s 00:12:18.526 16:51:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.526 16:51:07 -- common/autotest_common.sh@884 -- # size=4096 00:12:18.526 16:51:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.526 16:51:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 
00:12:18.526 16:51:07 -- common/autotest_common.sh@887 -- # return 0 00:12:18.526 16:51:07 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:18.526 16:51:07 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:18.526 16:51:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:12:18.785 16:51:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:12:18.785 16:51:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:12:18.785 16:51:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:12:18.785 16:51:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:12:18.785 16:51:07 -- common/autotest_common.sh@867 -- # local i 00:12:18.785 16:51:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:18.785 16:51:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:18.785 16:51:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:12:18.785 16:51:07 -- common/autotest_common.sh@871 -- # break 00:12:18.785 16:51:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:18.785 16:51:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:18.785 16:51:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.785 1+0 records in 00:12:18.785 1+0 records out 00:12:18.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000780168 s, 5.3 MB/s 00:12:18.785 16:51:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.785 16:51:07 -- common/autotest_common.sh@884 -- # size=4096 00:12:18.785 16:51:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.785 16:51:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:18.785 16:51:07 -- common/autotest_common.sh@887 -- # return 0 00:12:18.785 16:51:07 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:18.785 16:51:07 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:18.785 16:51:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:12:19.043 16:51:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:12:19.043 16:51:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:12:19.043 16:51:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:12:19.043 16:51:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:12:19.043 16:51:07 -- common/autotest_common.sh@867 -- # local i 00:12:19.043 16:51:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:19.043 16:51:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:19.043 16:51:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:12:19.043 16:51:07 -- common/autotest_common.sh@871 -- # break 00:12:19.043 16:51:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:19.043 16:51:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:19.043 16:51:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.043 1+0 records in 00:12:19.043 1+0 records out 00:12:19.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000787965 s, 5.2 MB/s 00:12:19.043 16:51:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.043 16:51:07 -- common/autotest_common.sh@884 -- # size=4096 00:12:19.043 16:51:07 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.043 16:51:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:19.043 16:51:07 -- common/autotest_common.sh@887 -- # return 0 00:12:19.043 16:51:07 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:19.043 16:51:07 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:19.043 16:51:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:12:19.301 16:51:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:12:19.301 16:51:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:12:19.301 16:51:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:12:19.301 16:51:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:12:19.301 16:51:08 -- common/autotest_common.sh@867 -- # local i 00:12:19.301 16:51:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:19.301 16:51:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:19.301 16:51:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:12:19.301 16:51:08 -- common/autotest_common.sh@871 -- # break 00:12:19.301 16:51:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:19.301 16:51:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:19.301 16:51:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.301 1+0 records in 00:12:19.301 1+0 records out 00:12:19.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000897297 s, 4.6 MB/s 00:12:19.301 16:51:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.301 16:51:08 -- common/autotest_common.sh@884 -- # size=4096 00:12:19.301 16:51:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.301 16:51:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:19.301 16:51:08 -- common/autotest_common.sh@887 -- # return 0 00:12:19.301 16:51:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:19.301 16:51:08 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:19.301 16:51:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:12:19.559 16:51:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:12:19.559 16:51:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:12:19.559 16:51:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:12:19.559 16:51:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:19.559 16:51:08 -- common/autotest_common.sh@867 -- # local i 00:12:19.559 16:51:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:19.559 16:51:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:19.559 16:51:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:19.559 16:51:08 -- common/autotest_common.sh@871 -- # break 00:12:19.559 16:51:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:19.559 16:51:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:19.559 16:51:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.559 1+0 records in 00:12:19.559 1+0 records out 00:12:19.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000892575 s, 4.6 MB/s 00:12:19.559 16:51:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.559 16:51:08 -- 
common/autotest_common.sh@884 -- # size=4096 00:12:19.559 16:51:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.559 16:51:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:19.559 16:51:08 -- common/autotest_common.sh@887 -- # return 0 00:12:19.559 16:51:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:19.559 16:51:08 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:19.559 16:51:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:12:19.817 16:51:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:12:19.817 16:51:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:12:19.817 16:51:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:12:19.817 16:51:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:19.817 16:51:08 -- common/autotest_common.sh@867 -- # local i 00:12:19.817 16:51:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:19.817 16:51:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:19.817 16:51:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:19.817 16:51:08 -- common/autotest_common.sh@871 -- # break 00:12:19.817 16:51:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:19.817 16:51:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:19.817 16:51:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.817 1+0 records in 00:12:19.817 1+0 records out 00:12:19.817 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663312 s, 6.2 MB/s 00:12:19.817 16:51:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.817 16:51:08 -- common/autotest_common.sh@884 -- # size=4096 00:12:19.817 16:51:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.817 16:51:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:19.817 16:51:08 -- common/autotest_common.sh@887 -- # return 0 00:12:19.817 16:51:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:19.817 16:51:08 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:19.817 16:51:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:12:20.075 16:51:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:12:20.075 16:51:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:12:20.075 16:51:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:12:20.075 16:51:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:20.075 16:51:08 -- common/autotest_common.sh@867 -- # local i 00:12:20.075 16:51:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:20.075 16:51:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:20.075 16:51:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:20.075 16:51:08 -- common/autotest_common.sh@871 -- # break 00:12:20.075 16:51:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:20.075 16:51:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:20.075 16:51:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:20.075 1+0 records in 00:12:20.075 1+0 records out 00:12:20.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000784668 s, 5.2 MB/s 00:12:20.075 16:51:08 -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.075 16:51:08 -- common/autotest_common.sh@884 -- # size=4096 00:12:20.075 16:51:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.075 16:51:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:20.075 16:51:08 -- common/autotest_common.sh@887 -- # return 0 00:12:20.075 16:51:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:20.075 16:51:08 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:20.075 16:51:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:12:20.333 16:51:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:12:20.333 16:51:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:12:20.333 16:51:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:12:20.333 16:51:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:20.333 16:51:09 -- common/autotest_common.sh@867 -- # local i 00:12:20.333 16:51:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:20.333 16:51:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:20.334 16:51:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:12:20.334 16:51:09 -- common/autotest_common.sh@871 -- # break 00:12:20.334 16:51:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:20.334 16:51:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:20.334 16:51:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:20.334 1+0 records in 00:12:20.334 1+0 records out 00:12:20.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0007517 s, 5.4 MB/s 00:12:20.334 16:51:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.334 16:51:09 -- common/autotest_common.sh@884 -- # size=4096 00:12:20.334 16:51:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.334 16:51:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:20.334 16:51:09 -- common/autotest_common.sh@887 -- # return 0 00:12:20.334 16:51:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:20.334 16:51:09 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:20.334 16:51:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:12:20.591 16:51:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:12:20.591 16:51:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:12:20.591 16:51:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:12:20.591 16:51:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:12:20.591 16:51:09 -- common/autotest_common.sh@867 -- # local i 00:12:20.591 16:51:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:20.591 16:51:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:20.591 16:51:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:12:20.591 16:51:09 -- common/autotest_common.sh@871 -- # break 00:12:20.591 16:51:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:20.591 16:51:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:20.591 16:51:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:20.591 1+0 records in 00:12:20.591 1+0 records out 
00:12:20.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000693685 s, 5.9 MB/s 00:12:20.592 16:51:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.592 16:51:09 -- common/autotest_common.sh@884 -- # size=4096 00:12:20.592 16:51:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.592 16:51:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:20.592 16:51:09 -- common/autotest_common.sh@887 -- # return 0 00:12:20.592 16:51:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:20.592 16:51:09 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:20.592 16:51:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:12:20.850 16:51:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:12:20.850 16:51:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:12:20.850 16:51:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:12:20.850 16:51:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:12:20.850 16:51:09 -- common/autotest_common.sh@867 -- # local i 00:12:20.850 16:51:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:20.850 16:51:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:20.850 16:51:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:12:21.108 16:51:09 -- common/autotest_common.sh@871 -- # break 00:12:21.108 16:51:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:21.108 16:51:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:21.108 16:51:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:21.108 1+0 records in 00:12:21.108 1+0 records out 00:12:21.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130864 s, 3.1 MB/s 00:12:21.108 16:51:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.108 16:51:09 -- common/autotest_common.sh@884 -- # size=4096 00:12:21.108 16:51:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.108 16:51:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:21.108 16:51:09 -- common/autotest_common.sh@887 -- # return 0 00:12:21.108 16:51:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:21.108 16:51:09 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:21.108 16:51:09 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:21.366 16:51:10 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd0", 00:12:21.366 "bdev_name": "Malloc0" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd1", 00:12:21.366 "bdev_name": "Malloc1p0" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd2", 00:12:21.366 "bdev_name": "Malloc1p1" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd3", 00:12:21.366 "bdev_name": "Malloc2p0" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd4", 00:12:21.366 "bdev_name": "Malloc2p1" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd5", 00:12:21.366 "bdev_name": "Malloc2p2" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd6", 00:12:21.366 "bdev_name": "Malloc2p3" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd7", 00:12:21.366 "bdev_name": "Malloc2p4" 00:12:21.366 }, 
00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd8", 00:12:21.366 "bdev_name": "Malloc2p5" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd9", 00:12:21.366 "bdev_name": "Malloc2p6" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd10", 00:12:21.366 "bdev_name": "Malloc2p7" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd11", 00:12:21.366 "bdev_name": "TestPT" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd12", 00:12:21.366 "bdev_name": "raid0" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd13", 00:12:21.366 "bdev_name": "concat0" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd14", 00:12:21.366 "bdev_name": "raid1" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd15", 00:12:21.366 "bdev_name": "AIO0" 00:12:21.366 } 00:12:21.366 ]' 00:12:21.366 16:51:10 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:21.366 16:51:10 -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd0", 00:12:21.366 "bdev_name": "Malloc0" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd1", 00:12:21.366 "bdev_name": "Malloc1p0" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd2", 00:12:21.366 "bdev_name": "Malloc1p1" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd3", 00:12:21.366 "bdev_name": "Malloc2p0" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd4", 00:12:21.366 "bdev_name": "Malloc2p1" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd5", 00:12:21.366 "bdev_name": "Malloc2p2" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd6", 00:12:21.366 "bdev_name": "Malloc2p3" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd7", 00:12:21.366 "bdev_name": "Malloc2p4" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd8", 00:12:21.366 "bdev_name": "Malloc2p5" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd9", 00:12:21.366 "bdev_name": "Malloc2p6" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd10", 00:12:21.366 "bdev_name": "Malloc2p7" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd11", 00:12:21.366 "bdev_name": "TestPT" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd12", 00:12:21.366 "bdev_name": "raid0" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd13", 00:12:21.366 "bdev_name": "concat0" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd14", 00:12:21.366 "bdev_name": "raid1" 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "nbd_device": "/dev/nbd15", 00:12:21.366 "bdev_name": "AIO0" 00:12:21.366 } 00:12:21.366 ]' 00:12:21.366 16:51:10 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:21.366 16:51:10 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:12:21.366 16:51:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:21.366 16:51:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:12:21.366 16:51:10 -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:12:21.366 16:51:10 -- bdev/nbd_common.sh@51 -- # local i 00:12:21.366 16:51:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:21.366 16:51:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:21.626 16:51:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:21.626 16:51:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:21.626 16:51:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:21.626 16:51:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.626 16:51:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.626 16:51:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:21.626 16:51:10 -- bdev/nbd_common.sh@41 -- # break 00:12:21.626 16:51:10 -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.626 16:51:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:21.626 16:51:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:21.885 16:51:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:21.885 16:51:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:21.885 16:51:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:21.885 16:51:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.885 16:51:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.885 16:51:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:21.885 16:51:10 -- bdev/nbd_common.sh@41 -- # break 00:12:21.885 16:51:10 -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.885 16:51:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:21.885 16:51:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:22.143 16:51:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:22.143 16:51:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:22.143 16:51:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:22.143 16:51:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.143 16:51:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.143 16:51:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:22.143 16:51:10 -- bdev/nbd_common.sh@41 -- # break 00:12:22.143 16:51:10 -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.143 16:51:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.143 16:51:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:22.402 16:51:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:22.402 16:51:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:22.402 16:51:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:22.402 16:51:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.402 16:51:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.402 16:51:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:22.402 16:51:11 -- bdev/nbd_common.sh@41 -- # break 00:12:22.402 16:51:11 -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.402 16:51:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.402 16:51:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:22.402 16:51:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:22.402 16:51:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:22.402 
16:51:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:22.402 16:51:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.402 16:51:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.402 16:51:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:22.402 16:51:11 -- bdev/nbd_common.sh@41 -- # break 00:12:22.402 16:51:11 -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.402 16:51:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.402 16:51:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:22.659 16:51:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:22.659 16:51:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:22.659 16:51:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:22.659 16:51:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.659 16:51:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.659 16:51:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:22.659 16:51:11 -- bdev/nbd_common.sh@41 -- # break 00:12:22.659 16:51:11 -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.659 16:51:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.659 16:51:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:22.918 16:51:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:22.918 16:51:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:22.918 16:51:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:22.918 16:51:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.918 16:51:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.918 16:51:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:22.918 16:51:11 -- bdev/nbd_common.sh@41 -- # break 00:12:22.918 16:51:11 -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.918 16:51:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.918 16:51:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:23.176 16:51:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:23.176 16:51:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:23.176 16:51:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:23.176 16:51:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.176 16:51:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.176 16:51:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:23.177 16:51:11 -- bdev/nbd_common.sh@41 -- # break 00:12:23.177 16:51:11 -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.177 16:51:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.177 16:51:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:23.435 16:51:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:23.435 16:51:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:23.435 16:51:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:23.435 16:51:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.435 16:51:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.435 16:51:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:23.435 16:51:12 -- bdev/nbd_common.sh@41 -- # break 00:12:23.435 16:51:12 -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.435 16:51:12 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:12:23.435 16:51:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:23.695 16:51:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:23.695 16:51:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:23.695 16:51:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:23.695 16:51:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.695 16:51:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.695 16:51:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:23.695 16:51:12 -- bdev/nbd_common.sh@41 -- # break 00:12:23.695 16:51:12 -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.695 16:51:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.695 16:51:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:23.954 16:51:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:23.954 16:51:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:23.954 16:51:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:23.954 16:51:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.954 16:51:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.954 16:51:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:23.954 16:51:12 -- bdev/nbd_common.sh@41 -- # break 00:12:23.954 16:51:12 -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.954 16:51:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.954 16:51:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:24.213 16:51:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:24.213 16:51:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:24.213 16:51:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:24.213 16:51:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.213 16:51:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.213 16:51:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:24.213 16:51:12 -- bdev/nbd_common.sh@41 -- # break 00:12:24.213 16:51:12 -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.213 16:51:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.213 16:51:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:24.213 16:51:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:24.213 16:51:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:24.213 16:51:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:24.213 16:51:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.213 16:51:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.213 16:51:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:24.213 16:51:13 -- bdev/nbd_common.sh@41 -- # break 00:12:24.213 16:51:13 -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.213 16:51:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.213 16:51:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:24.473 16:51:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:24.473 16:51:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:24.473 16:51:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:24.473 16:51:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:12:24.473 16:51:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.473 16:51:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:24.473 16:51:13 -- bdev/nbd_common.sh@41 -- # break 00:12:24.473 16:51:13 -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.473 16:51:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.473 16:51:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:24.735 16:51:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:24.735 16:51:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:24.735 16:51:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:24.735 16:51:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.735 16:51:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.735 16:51:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:24.735 16:51:13 -- bdev/nbd_common.sh@41 -- # break 00:12:24.735 16:51:13 -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.735 16:51:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.735 16:51:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:25.000 16:51:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:25.000 16:51:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:25.000 16:51:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:25.000 16:51:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:25.000 16:51:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:25.000 16:51:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:25.000 16:51:13 -- bdev/nbd_common.sh@41 -- # break 00:12:25.000 16:51:13 -- bdev/nbd_common.sh@45 -- # return 0 00:12:25.001 16:51:13 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:25.001 16:51:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:25.001 16:51:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@65 -- # true 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@65 -- # count=0 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@122 -- # count=0 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@127 -- # return 0 00:12:25.259 16:51:14 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 
'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@12 -- # local i 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:25.259 16:51:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:25.517 /dev/nbd0 00:12:25.517 16:51:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:25.517 16:51:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:25.517 16:51:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:25.517 16:51:14 -- common/autotest_common.sh@867 -- # local i 00:12:25.517 16:51:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:25.517 16:51:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:25.517 16:51:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:25.517 16:51:14 -- common/autotest_common.sh@871 -- # break 00:12:25.517 16:51:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:25.517 16:51:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:25.517 16:51:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.517 1+0 records in 00:12:25.517 1+0 records out 00:12:25.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611836 s, 6.7 MB/s 00:12:25.517 16:51:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.517 16:51:14 -- common/autotest_common.sh@884 -- # size=4096 00:12:25.517 16:51:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.517 16:51:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:25.517 16:51:14 -- common/autotest_common.sh@887 -- # return 0 00:12:25.517 16:51:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.517 
16:51:14 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:25.517 16:51:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:12:25.776 /dev/nbd1 00:12:25.776 16:51:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:25.776 16:51:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:25.776 16:51:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:25.776 16:51:14 -- common/autotest_common.sh@867 -- # local i 00:12:25.776 16:51:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:25.776 16:51:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:25.776 16:51:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:25.776 16:51:14 -- common/autotest_common.sh@871 -- # break 00:12:25.776 16:51:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:25.776 16:51:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:25.776 16:51:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.776 1+0 records in 00:12:25.776 1+0 records out 00:12:25.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347073 s, 11.8 MB/s 00:12:25.776 16:51:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.776 16:51:14 -- common/autotest_common.sh@884 -- # size=4096 00:12:25.776 16:51:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.776 16:51:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:25.776 16:51:14 -- common/autotest_common.sh@887 -- # return 0 00:12:25.776 16:51:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.776 16:51:14 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:25.776 16:51:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:12:26.034 /dev/nbd10 00:12:26.292 16:51:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:26.292 16:51:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:26.293 16:51:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:26.293 16:51:14 -- common/autotest_common.sh@867 -- # local i 00:12:26.293 16:51:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:26.293 16:51:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:26.293 16:51:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:26.293 16:51:14 -- common/autotest_common.sh@871 -- # break 00:12:26.293 16:51:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:26.293 16:51:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:26.293 16:51:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:26.293 1+0 records in 00:12:26.293 1+0 records out 00:12:26.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043055 s, 9.5 MB/s 00:12:26.293 16:51:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.293 16:51:14 -- common/autotest_common.sh@884 -- # size=4096 00:12:26.293 16:51:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.293 16:51:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:26.293 16:51:14 -- common/autotest_common.sh@887 -- # return 0 00:12:26.293 16:51:14 -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:12:26.293 16:51:14 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:26.293 16:51:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:12:26.551 /dev/nbd11 00:12:26.552 16:51:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:26.552 16:51:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:26.552 16:51:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:26.552 16:51:15 -- common/autotest_common.sh@867 -- # local i 00:12:26.552 16:51:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:26.552 16:51:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:26.552 16:51:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:26.552 16:51:15 -- common/autotest_common.sh@871 -- # break 00:12:26.552 16:51:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:26.552 16:51:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:26.552 16:51:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:26.552 1+0 records in 00:12:26.552 1+0 records out 00:12:26.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041486 s, 9.9 MB/s 00:12:26.552 16:51:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.552 16:51:15 -- common/autotest_common.sh@884 -- # size=4096 00:12:26.552 16:51:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.552 16:51:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:26.552 16:51:15 -- common/autotest_common.sh@887 -- # return 0 00:12:26.552 16:51:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.552 16:51:15 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:26.552 16:51:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:12:26.810 /dev/nbd12 00:12:26.810 16:51:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:26.810 16:51:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:26.810 16:51:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:26.810 16:51:15 -- common/autotest_common.sh@867 -- # local i 00:12:26.810 16:51:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:26.810 16:51:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:26.810 16:51:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:26.810 16:51:15 -- common/autotest_common.sh@871 -- # break 00:12:26.810 16:51:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:26.810 16:51:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:26.810 16:51:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:26.810 1+0 records in 00:12:26.810 1+0 records out 00:12:26.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406606 s, 10.1 MB/s 00:12:26.810 16:51:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.810 16:51:15 -- common/autotest_common.sh@884 -- # size=4096 00:12:26.810 16:51:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.810 16:51:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:26.810 16:51:15 -- common/autotest_common.sh@887 -- # return 0 00:12:26.810 16:51:15 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.810 16:51:15 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:26.810 16:51:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:12:27.068 /dev/nbd13 00:12:27.068 16:51:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:27.068 16:51:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:27.068 16:51:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:27.068 16:51:15 -- common/autotest_common.sh@867 -- # local i 00:12:27.068 16:51:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:27.068 16:51:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:27.068 16:51:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:12:27.068 16:51:15 -- common/autotest_common.sh@871 -- # break 00:12:27.068 16:51:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:27.068 16:51:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:27.068 16:51:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:27.068 1+0 records in 00:12:27.068 1+0 records out 00:12:27.068 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459196 s, 8.9 MB/s 00:12:27.068 16:51:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.068 16:51:15 -- common/autotest_common.sh@884 -- # size=4096 00:12:27.068 16:51:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.068 16:51:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:27.068 16:51:15 -- common/autotest_common.sh@887 -- # return 0 00:12:27.068 16:51:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:27.069 16:51:15 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:27.069 16:51:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:12:27.328 /dev/nbd14 00:12:27.328 16:51:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:12:27.328 16:51:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:12:27.328 16:51:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:12:27.328 16:51:16 -- common/autotest_common.sh@867 -- # local i 00:12:27.328 16:51:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:27.328 16:51:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:27.328 16:51:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:12:27.328 16:51:16 -- common/autotest_common.sh@871 -- # break 00:12:27.328 16:51:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:27.328 16:51:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:27.328 16:51:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:27.328 1+0 records in 00:12:27.328 1+0 records out 00:12:27.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412715 s, 9.9 MB/s 00:12:27.328 16:51:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.328 16:51:16 -- common/autotest_common.sh@884 -- # size=4096 00:12:27.328 16:51:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.328 16:51:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:27.328 16:51:16 -- common/autotest_common.sh@887 -- # return 0 
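Every attach in this sequence is gated by the same readiness probe: waitfornbd polls /proc/partitions until the new nbdX node registers, then insists that a single 4 KiB O_DIRECT read returns real data before the test moves on. A condensed sketch of that probe, with the retry bound and paths taken from the trace (the back-off between attempts is an assumption, the trace does not show one):

  # Sketch of the waitfornbd gate traced above; not the canonical SPDK source.
  waitfornbd() {
      local nbd_name=$1 i size
      local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
      for ((i = 1; i <= 20; i++)); do
          # Wait for the kernel to publish the device in /proc/partitions.
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1   # assumed back-off, not visible in the trace
      done
      # One direct-I/O read must come back non-empty before the device is trusted.
      dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
      size=$(stat -c %s "$tmp")
      rm -f "$tmp"
      [ "$size" != 0 ]
  }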
00:12:27.328 16:51:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:27.328 16:51:16 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:27.328 16:51:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:12:27.587 /dev/nbd15 00:12:27.587 16:51:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:12:27.587 16:51:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:12:27.587 16:51:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:12:27.587 16:51:16 -- common/autotest_common.sh@867 -- # local i 00:12:27.587 16:51:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:27.587 16:51:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:27.587 16:51:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:12:27.587 16:51:16 -- common/autotest_common.sh@871 -- # break 00:12:27.587 16:51:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:27.587 16:51:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:27.587 16:51:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:27.587 1+0 records in 00:12:27.587 1+0 records out 00:12:27.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453844 s, 9.0 MB/s 00:12:27.587 16:51:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.587 16:51:16 -- common/autotest_common.sh@884 -- # size=4096 00:12:27.587 16:51:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.587 16:51:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:27.587 16:51:16 -- common/autotest_common.sh@887 -- # return 0 00:12:27.587 16:51:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:27.587 16:51:16 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:27.587 16:51:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:12:27.846 /dev/nbd2 00:12:27.846 16:51:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:12:27.846 16:51:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:12:27.846 16:51:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:27.846 16:51:16 -- common/autotest_common.sh@867 -- # local i 00:12:27.846 16:51:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:27.846 16:51:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:27.846 16:51:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:27.846 16:51:16 -- common/autotest_common.sh@871 -- # break 00:12:27.846 16:51:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:27.846 16:51:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:27.846 16:51:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:27.846 1+0 records in 00:12:27.846 1+0 records out 00:12:27.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535253 s, 7.7 MB/s 00:12:27.846 16:51:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.846 16:51:16 -- common/autotest_common.sh@884 -- # size=4096 00:12:27.846 16:51:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.846 16:51:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:27.846 16:51:16 -- common/autotest_common.sh@887 
-- # return 0 00:12:27.846 16:51:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:27.846 16:51:16 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:27.846 16:51:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:12:28.104 /dev/nbd3 00:12:28.104 16:51:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:12:28.104 16:51:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:12:28.104 16:51:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:28.104 16:51:16 -- common/autotest_common.sh@867 -- # local i 00:12:28.104 16:51:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:28.104 16:51:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:28.104 16:51:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:28.104 16:51:16 -- common/autotest_common.sh@871 -- # break 00:12:28.104 16:51:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:28.104 16:51:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:28.104 16:51:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.104 1+0 records in 00:12:28.104 1+0 records out 00:12:28.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000635406 s, 6.4 MB/s 00:12:28.104 16:51:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.104 16:51:16 -- common/autotest_common.sh@884 -- # size=4096 00:12:28.104 16:51:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.104 16:51:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:28.104 16:51:16 -- common/autotest_common.sh@887 -- # return 0 00:12:28.104 16:51:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.104 16:51:16 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:28.104 16:51:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:12:28.363 /dev/nbd4 00:12:28.363 16:51:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:12:28.363 16:51:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:12:28.363 16:51:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:28.363 16:51:17 -- common/autotest_common.sh@867 -- # local i 00:12:28.363 16:51:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:28.363 16:51:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:28.363 16:51:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:28.363 16:51:17 -- common/autotest_common.sh@871 -- # break 00:12:28.363 16:51:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:28.363 16:51:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:28.363 16:51:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.363 1+0 records in 00:12:28.363 1+0 records out 00:12:28.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000808281 s, 5.1 MB/s 00:12:28.363 16:51:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.363 16:51:17 -- common/autotest_common.sh@884 -- # size=4096 00:12:28.363 16:51:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.363 16:51:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:28.363 16:51:17 -- 
common/autotest_common.sh@887 -- # return 0 00:12:28.363 16:51:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.363 16:51:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:28.363 16:51:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:12:28.621 /dev/nbd5 00:12:28.621 16:51:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:12:28.621 16:51:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:12:28.621 16:51:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:28.621 16:51:17 -- common/autotest_common.sh@867 -- # local i 00:12:28.621 16:51:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:28.621 16:51:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:28.621 16:51:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:28.621 16:51:17 -- common/autotest_common.sh@871 -- # break 00:12:28.622 16:51:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:28.622 16:51:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:28.622 16:51:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.622 1+0 records in 00:12:28.622 1+0 records out 00:12:28.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000639374 s, 6.4 MB/s 00:12:28.622 16:51:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.622 16:51:17 -- common/autotest_common.sh@884 -- # size=4096 00:12:28.622 16:51:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.622 16:51:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:28.622 16:51:17 -- common/autotest_common.sh@887 -- # return 0 00:12:28.622 16:51:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.622 16:51:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:28.622 16:51:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:12:28.880 /dev/nbd6 00:12:28.880 16:51:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:12:28.880 16:51:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:12:28.880 16:51:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:12:28.880 16:51:17 -- common/autotest_common.sh@867 -- # local i 00:12:28.880 16:51:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:28.880 16:51:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:28.880 16:51:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:12:28.880 16:51:17 -- common/autotest_common.sh@871 -- # break 00:12:28.880 16:51:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:28.880 16:51:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:28.880 16:51:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.880 1+0 records in 00:12:28.880 1+0 records out 00:12:28.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048634 s, 8.4 MB/s 00:12:28.880 16:51:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.880 16:51:17 -- common/autotest_common.sh@884 -- # size=4096 00:12:28.880 16:51:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.880 16:51:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:28.880 16:51:17 -- 
common/autotest_common.sh@887 -- # return 0 00:12:28.880 16:51:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.880 16:51:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:28.880 16:51:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:12:29.138 /dev/nbd7 00:12:29.138 16:51:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:12:29.138 16:51:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:12:29.138 16:51:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:12:29.138 16:51:17 -- common/autotest_common.sh@867 -- # local i 00:12:29.138 16:51:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:29.138 16:51:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:29.138 16:51:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:12:29.138 16:51:17 -- common/autotest_common.sh@871 -- # break 00:12:29.138 16:51:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:29.138 16:51:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:29.138 16:51:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.138 1+0 records in 00:12:29.138 1+0 records out 00:12:29.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000898423 s, 4.6 MB/s 00:12:29.138 16:51:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.138 16:51:17 -- common/autotest_common.sh@884 -- # size=4096 00:12:29.138 16:51:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.138 16:51:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:29.138 16:51:17 -- common/autotest_common.sh@887 -- # return 0 00:12:29.138 16:51:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:29.138 16:51:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:29.138 16:51:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:12:29.395 /dev/nbd8 00:12:29.395 16:51:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:12:29.395 16:51:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:12:29.395 16:51:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:12:29.395 16:51:18 -- common/autotest_common.sh@867 -- # local i 00:12:29.395 16:51:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:29.395 16:51:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:29.395 16:51:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:12:29.395 16:51:18 -- common/autotest_common.sh@871 -- # break 00:12:29.395 16:51:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:29.395 16:51:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:29.396 16:51:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.396 1+0 records in 00:12:29.396 1+0 records out 00:12:29.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047412 s, 8.6 MB/s 00:12:29.396 16:51:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.396 16:51:18 -- common/autotest_common.sh@884 -- # size=4096 00:12:29.396 16:51:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.396 16:51:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:29.396 16:51:18 -- 
common/autotest_common.sh@887 -- # return 0
00:12:29.396 16:51:18 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:29.396 16:51:18 -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:12:29.396 16:51:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9
00:12:29.653 /dev/nbd9
00:12:29.653 16:51:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9
00:12:29.653 16:51:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9
00:12:29.653 16:51:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd9
00:12:29.653 16:51:18 -- common/autotest_common.sh@867 -- # local i
00:12:29.653 16:51:18 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:12:29.653 16:51:18 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:12:29.653 16:51:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions
00:12:29.653 16:51:18 -- common/autotest_common.sh@871 -- # break
00:12:29.653 16:51:18 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:12:29.653 16:51:18 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:12:29.653 16:51:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:29.653 1+0 records in
00:12:29.653 1+0 records out
00:12:29.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101677 s, 4.0 MB/s
00:12:29.653 16:51:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:29.653 16:51:18 -- common/autotest_common.sh@884 -- # size=4096
00:12:29.653 16:51:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:29.653 16:51:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:12:29.653 16:51:18 -- common/autotest_common.sh@887 -- # return 0
00:12:29.653 16:51:18 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:29.653 16:51:18 -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:12:29.653 16:51:18 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:29.653 16:51:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:29.653 16:51:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd0",
00:12:29.912 "bdev_name": "Malloc0"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd1",
00:12:29.912 "bdev_name": "Malloc1p0"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd10",
00:12:29.912 "bdev_name": "Malloc1p1"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd11",
00:12:29.912 "bdev_name": "Malloc2p0"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd12",
00:12:29.912 "bdev_name": "Malloc2p1"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd13",
00:12:29.912 "bdev_name": "Malloc2p2"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd14",
00:12:29.912 "bdev_name": "Malloc2p3"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd15",
00:12:29.912 "bdev_name": "Malloc2p4"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd2",
00:12:29.912 "bdev_name": "Malloc2p5"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd3",
00:12:29.912 "bdev_name": "Malloc2p6"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd4",
00:12:29.912 "bdev_name": "Malloc2p7"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd5",
00:12:29.912 "bdev_name": "TestPT"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd6",
00:12:29.912 "bdev_name": "raid0"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd7",
00:12:29.912 "bdev_name": "concat0"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd8",
00:12:29.912 "bdev_name": "raid1"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd9",
00:12:29.912 "bdev_name": "AIO0"
00:12:29.912 }
00:12:29.912 ]'
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@64 -- # echo '[
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd0",
00:12:29.912 "bdev_name": "Malloc0"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd1",
00:12:29.912 "bdev_name": "Malloc1p0"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd10",
00:12:29.912 "bdev_name": "Malloc1p1"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd11",
00:12:29.912 "bdev_name": "Malloc2p0"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd12",
00:12:29.912 "bdev_name": "Malloc2p1"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd13",
00:12:29.912 "bdev_name": "Malloc2p2"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd14",
00:12:29.912 "bdev_name": "Malloc2p3"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd15",
00:12:29.912 "bdev_name": "Malloc2p4"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd2",
00:12:29.912 "bdev_name": "Malloc2p5"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd3",
00:12:29.912 "bdev_name": "Malloc2p6"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd4",
00:12:29.912 "bdev_name": "Malloc2p7"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd5",
00:12:29.912 "bdev_name": "TestPT"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd6",
00:12:29.912 "bdev_name": "raid0"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd7",
00:12:29.912 "bdev_name": "concat0"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd8",
00:12:29.912 "bdev_name": "raid1"
00:12:29.912 },
00:12:29.912 {
00:12:29.912 "nbd_device": "/dev/nbd9",
00:12:29.912 "bdev_name": "AIO0"
00:12:29.912 }
00:12:29.912 ]'
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:12:29.912 /dev/nbd1
00:12:29.912 /dev/nbd10
00:12:29.912 /dev/nbd11
00:12:29.912 /dev/nbd12
00:12:29.912 /dev/nbd13
00:12:29.912 /dev/nbd14
00:12:29.912 /dev/nbd15
00:12:29.912 /dev/nbd2
00:12:29.912 /dev/nbd3
00:12:29.912 /dev/nbd4
00:12:29.912 /dev/nbd5
00:12:29.912 /dev/nbd6
00:12:29.912 /dev/nbd7
00:12:29.912 /dev/nbd8
00:12:29.912 /dev/nbd9'
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:12:29.912 /dev/nbd1
00:12:29.912 /dev/nbd10
00:12:29.912 /dev/nbd11
00:12:29.912 /dev/nbd12
00:12:29.912 /dev/nbd13
00:12:29.912 /dev/nbd14
00:12:29.912 /dev/nbd15
00:12:29.912 /dev/nbd2
00:12:29.912 /dev/nbd3
00:12:29.912 /dev/nbd4
00:12:29.912 /dev/nbd5
00:12:29.912 /dev/nbd6
00:12:29.912 /dev/nbd7
00:12:29.912 /dev/nbd8
00:12:29.912 /dev/nbd9'
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@65 -- # count=16
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@66 -- # echo 16
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@95 -- # count=16
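The device count asserted next is derived entirely from that JSON: the test pipes nbd_get_disks through jq to pull out each nbd_device field and counts the /dev/nbd matches, expecting 16 while everything is attached and 0 once the devices are stopped. The same query as a standalone one-liner, using the rpc.py path and socket from the trace:

  # Count the nbd devices currently exported by the SPDK nbd server.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd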
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']'
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@71 -- # local operation=write
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:12:29.912 256+0 records in
00:12:29.912 256+0 records out
00:12:29.912 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00425302 s, 247 MB/s
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:12:29.912 256+0 records in
00:12:29.912 256+0 records out
00:12:29.912 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133245 s, 7.9 MB/s
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:29.912 16:51:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:12:30.171 256+0 records in
00:12:30.171 256+0 records out
00:12:30.171 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139046 s, 7.5 MB/s
00:12:30.171 16:51:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:30.171 16:51:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:12:30.436 256+0 records in
00:12:30.436 256+0 records out
00:12:30.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129564 s, 8.1 MB/s
00:12:30.436 16:51:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:30.436 16:51:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:12:30.436 256+0 records in
00:12:30.436 256+0 records out
00:12:30.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143361 s, 7.3 MB/s
00:12:30.436 16:51:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:30.436 16:51:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:12:30.707 256+0 records in
00:12:30.707 256+0 records out
00:12:30.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13824 s, 7.6 MB/s
00:12:30.707 16:51:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:30.707 16:51:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:12:30.707 256+0 records in
00:12:30.707 256+0 records out
00:12:30.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138776 s, 7.6 MB/s
00:12:30.707 16:51:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:30.707 16:51:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct
00:12:30.965 256+0 records in
00:12:30.965 256+0 records out
00:12:30.965 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142665 s, 7.3 MB/s
00:12:30.965 16:51:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:30.965 16:51:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct
00:12:30.965 256+0 records in
00:12:30.965 256+0 records out
00:12:30.965 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140723 s, 7.5 MB/s
00:12:30.965 16:51:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:30.965 16:51:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct
00:12:31.223 256+0 records in
00:12:31.223 256+0 records out
00:12:31.223 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138896 s, 7.5 MB/s
00:12:31.223 16:51:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:31.223 16:51:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct
00:12:31.223 256+0 records in
00:12:31.223 256+0 records out
00:12:31.223 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171137 s, 6.1 MB/s
00:12:31.223 16:51:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:31.223 16:51:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct
00:12:31.482 256+0 records in
00:12:31.482 256+0 records out
00:12:31.482 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140015 s, 7.5 MB/s
00:12:31.482 16:51:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:31.482 16:51:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct
00:12:31.740 256+0 records in
00:12:31.740 256+0 records out
00:12:31.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139128 s, 7.5 MB/s
00:12:31.740 16:51:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:31.740 16:51:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct
00:12:31.740 256+0 records in
00:12:31.740 256+0 records out
00:12:31.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139395 s, 7.5 MB/s
00:12:31.740 16:51:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:31.740 16:51:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct
00:12:31.998 256+0 records in
00:12:31.998 256+0 records out
00:12:31.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147496 s, 7.1 MB/s
00:12:31.998 16:51:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:31.998 16:51:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct
00:12:31.998 256+0 records in
00:12:31.998 256+0 records out
00:12:31.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142984 s, 7.3 MB/s
00:12:31.998 16:51:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:31.998 16:51:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct
00:12:32.256 256+0 records in
00:12:32.256 256+0 records out
00:12:32.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.224774 s, 4.7 MB/s
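The write pass above and the verify pass below form one round trip per device: a single 1 MiB random pattern is generated once, pushed through every nbd device with O_DIRECT, then compared byte-for-byte against the device contents. Reduced to one device, the whole check is (paths and sizes exactly as traced):

  # Round-trip data check for one device, mirroring nbd_dd_data_verify.
  pattern=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
  dd if=/dev/urandom of="$pattern" bs=4096 count=256             # 1 MiB random pattern
  dd if="$pattern" of=/dev/nbd0 bs=4096 count=256 oflag=direct   # write it through the nbd device
  cmp -b -n 1M "$pattern" /dev/nbd0                              # read back and compare byte-for-byte
  rm "$pattern"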
00:12:32.256 16:51:21 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:32.257 16:51:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@51 -- # local i
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:32.515 16:51:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:12:32.774 16:51:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:32.774 16:51:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:32.774 16:51:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:32.774 16:51:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:32.774 16:51:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:32.774 16:51:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:32.774 16:51:21 -- bdev/nbd_common.sh@41 -- # break
00:12:32.774 16:51:21 -- bdev/nbd_common.sh@45 -- # return 0
00:12:32.774 16:51:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:32.774 16:51:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:12:33.033 16:51:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:12:33.033 16:51:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:12:33.033 16:51:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:12:33.033 16:51:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:33.033 16:51:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:33.033 16:51:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:12:33.033 16:51:21 -- bdev/nbd_common.sh@41 -- # break
00:12:33.033 16:51:21 -- bdev/nbd_common.sh@45 -- # return 0
00:12:33.033 16:51:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:33.033 16:51:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:12:33.290 16:51:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:12:33.290 16:51:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:12:33.290 16:51:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:12:33.290 16:51:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:33.290 16:51:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:33.290 16:51:22 --
bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:33.290 16:51:22 -- bdev/nbd_common.sh@41 -- # break 00:12:33.290 16:51:22 -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.290 16:51:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.290 16:51:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:33.548 16:51:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:33.548 16:51:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:33.548 16:51:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:33.548 16:51:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.548 16:51:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.548 16:51:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:33.548 16:51:22 -- bdev/nbd_common.sh@41 -- # break 00:12:33.548 16:51:22 -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.548 16:51:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.548 16:51:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:33.806 16:51:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:33.806 16:51:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:33.806 16:51:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:33.806 16:51:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.806 16:51:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.806 16:51:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:33.806 16:51:22 -- bdev/nbd_common.sh@41 -- # break 00:12:33.806 16:51:22 -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.806 16:51:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.806 16:51:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:34.064 16:51:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:34.064 16:51:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:34.064 16:51:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:34.064 16:51:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.064 16:51:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.064 16:51:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:34.064 16:51:22 -- bdev/nbd_common.sh@41 -- # break 00:12:34.064 16:51:22 -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.064 16:51:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.064 16:51:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:34.064 16:51:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:34.064 16:51:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:34.064 16:51:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:34.064 16:51:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.064 16:51:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.064 16:51:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:34.064 16:51:22 -- bdev/nbd_common.sh@41 -- # break 00:12:34.064 16:51:22 -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.064 16:51:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.064 16:51:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:34.322 16:51:23 -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:34.322 16:51:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:34.322 16:51:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:34.322 16:51:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.322 16:51:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.322 16:51:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:34.322 16:51:23 -- bdev/nbd_common.sh@41 -- # break 00:12:34.322 16:51:23 -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.322 16:51:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.322 16:51:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:34.580 16:51:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:34.580 16:51:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:34.580 16:51:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:34.580 16:51:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.580 16:51:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.580 16:51:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:34.580 16:51:23 -- bdev/nbd_common.sh@41 -- # break 00:12:34.580 16:51:23 -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.580 16:51:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.580 16:51:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:34.880 16:51:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:34.880 16:51:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:34.880 16:51:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:34.880 16:51:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.880 16:51:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.880 16:51:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:34.880 16:51:23 -- bdev/nbd_common.sh@41 -- # break 00:12:34.880 16:51:23 -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.880 16:51:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.880 16:51:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:35.139 16:51:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:35.139 16:51:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:35.139 16:51:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:35.139 16:51:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.139 16:51:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.139 16:51:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:35.139 16:51:23 -- bdev/nbd_common.sh@41 -- # break 00:12:35.139 16:51:23 -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.139 16:51:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.139 16:51:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:35.139 16:51:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:35.139 16:51:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:35.139 16:51:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:35.139 16:51:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.139 16:51:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.139 16:51:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:35.398 16:51:24 -- bdev/nbd_common.sh@41 
-- # break 00:12:35.398 16:51:24 -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.398 16:51:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.398 16:51:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:35.398 16:51:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:35.398 16:51:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:35.398 16:51:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:35.398 16:51:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.398 16:51:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.398 16:51:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:35.398 16:51:24 -- bdev/nbd_common.sh@41 -- # break 00:12:35.398 16:51:24 -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.398 16:51:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.398 16:51:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:35.656 16:51:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:35.656 16:51:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:35.656 16:51:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:35.656 16:51:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.656 16:51:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.656 16:51:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:35.656 16:51:24 -- bdev/nbd_common.sh@41 -- # break 00:12:35.656 16:51:24 -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.656 16:51:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.656 16:51:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:35.914 16:51:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:35.914 16:51:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:35.914 16:51:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:35.914 16:51:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.914 16:51:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.914 16:51:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:35.914 16:51:24 -- bdev/nbd_common.sh@41 -- # break 00:12:35.914 16:51:24 -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.914 16:51:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.914 16:51:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:36.173 16:51:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:36.173 16:51:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:36.173 16:51:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:36.173 16:51:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.173 16:51:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.173 16:51:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:36.173 16:51:24 -- bdev/nbd_common.sh@41 -- # break 00:12:36.173 16:51:24 -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.173 16:51:24 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:36.173 16:51:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.173 16:51:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:36.431 16:51:25 -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[]' 00:12:36.431 16:51:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:36.431 16:51:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:36.431 16:51:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:36.431 16:51:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:36.431 16:51:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:36.431 16:51:25 -- bdev/nbd_common.sh@65 -- # true 00:12:36.431 16:51:25 -- bdev/nbd_common.sh@65 -- # count=0 00:12:36.431 16:51:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:36.431 16:51:25 -- bdev/nbd_common.sh@104 -- # count=0 00:12:36.431 16:51:25 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:36.431 16:51:25 -- bdev/nbd_common.sh@109 -- # return 0 00:12:36.431 16:51:25 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:36.431 16:51:25 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.431 16:51:25 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:36.431 16:51:25 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:36.431 16:51:25 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:36.431 16:51:25 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:36.710 malloc_lvol_verify 00:12:36.710 16:51:25 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:36.967 fd50101f-76e1-404c-a30f-96bfbe921a62 00:12:36.968 16:51:25 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:37.225 0c5228a0-3cd5-45a0-8f53-67416a38fd26 00:12:37.225 16:51:25 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:37.483 /dev/nbd0 00:12:37.483 16:51:26 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:37.483 mke2fs 1.46.5 (30-Dec-2021) 00:12:37.483 00:12:37.483 Filesystem too small for a journal 00:12:37.483 Discarding device blocks: 0/1024 done 00:12:37.483 Creating filesystem with 1024 4k blocks and 1024 inodes 00:12:37.483 00:12:37.483 Allocating group tables: 0/1 done 00:12:37.483 Writing inode tables: 0/1 done 00:12:37.483 Writing superblocks and filesystem accounting information: 0/1 done 00:12:37.483 00:12:37.483 16:51:26 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:37.483 16:51:26 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:37.483 16:51:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.483 16:51:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:37.483 16:51:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:37.483 16:51:26 -- bdev/nbd_common.sh@51 -- # local i 00:12:37.483 16:51:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.483 16:51:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:37.742 16:51:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:37.742 
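
With all sixteen devices stopped, nbd_get_count re-queries the server above: nbd_get_disks returns '[]', jq -r '.[] | .nbd_device' extracts nothing, and grep -c /dev/nbd counts 0 (the traced @65 true tolerates grep's nonzero exit on zero matches), so the count check passes. The nbd_with_lvol_verify step that follows then proves the data path end to end: carve a logical volume out of a malloc bdev, export it over nbd, and put a filesystem on it. The RPC sequence, condensed from the trace (the rpc.py invocations are verbatim; only the $rpc shorthand is introduced here):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export it over nbd
    mkfs.ext4 /dev/nbd0

mke2fs's "Filesystem too small for a journal" is expected at 4 MiB and is not an error; the test only checks that mkfs exits 0 (mkfs_ret=0) before stopping /dev/nbd0 again.
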
16:51:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:37.742 16:51:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:37.742 16:51:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.742 16:51:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.742 16:51:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:37.742 16:51:26 -- bdev/nbd_common.sh@41 -- # break 00:12:37.742 16:51:26 -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.742 16:51:26 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:37.742 16:51:26 -- bdev/nbd_common.sh@147 -- # return 0 00:12:37.742 16:51:26 -- bdev/blockdev.sh@324 -- # killprocess 108752 00:12:37.742 16:51:26 -- common/autotest_common.sh@936 -- # '[' -z 108752 ']' 00:12:37.742 16:51:26 -- common/autotest_common.sh@940 -- # kill -0 108752 00:12:37.742 16:51:26 -- common/autotest_common.sh@941 -- # uname 00:12:37.742 16:51:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:37.742 16:51:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 108752 00:12:37.742 16:51:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:37.742 16:51:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:37.742 16:51:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 108752' 00:12:37.742 killing process with pid 108752 00:12:37.742 16:51:26 -- common/autotest_common.sh@955 -- # kill 108752 00:12:37.742 16:51:26 -- common/autotest_common.sh@960 -- # wait 108752 00:12:39.645 16:51:28 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:39.645 00:12:39.645 real 0m24.270s 00:12:39.645 user 0m33.771s 00:12:39.645 sys 0m7.995s 00:12:39.645 16:51:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:39.645 ************************************ 00:12:39.645 16:51:28 -- common/autotest_common.sh@10 -- # set +x 00:12:39.645 END TEST bdev_nbd 00:12:39.645 ************************************ 00:12:39.646 16:51:28 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:39.646 16:51:28 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:39.646 16:51:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:39.646 16:51:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:39.646 16:51:28 -- common/autotest_common.sh@10 -- # set +x 00:12:39.646 ************************************ 00:12:39.646 START TEST bdev_fio 00:12:39.646 ************************************ 00:12:39.646 16:51:28 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@329 -- # local env_context 00:12:39.646 16:51:28 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:39.646 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:39.646 16:51:28 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:39.646 16:51:28 -- bdev/blockdev.sh@337 -- # echo '' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:39.646 16:51:28 -- bdev/blockdev.sh@337 -- # env_context= 00:12:39.646 16:51:28 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:39.646 16:51:28 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:39.646 16:51:28 -- common/autotest_common.sh@1270 -- # 
local workload=verify 00:12:39.646 16:51:28 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:12:39.646 16:51:28 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:39.646 16:51:28 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:39.646 16:51:28 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:39.646 16:51:28 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:12:39.646 16:51:28 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:39.646 16:51:28 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:39.646 16:51:28 -- common/autotest_common.sh@1290 -- # cat 00:12:39.646 16:51:28 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:12:39.646 16:51:28 -- common/autotest_common.sh@1303 -- # cat 00:12:39.646 16:51:28 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:12:39.646 16:51:28 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:12:39.646 16:51:28 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:39.646 16:51:28 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:12:39.646 16:51:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:39.646 16:51:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:12:39.646 16:51:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:39.646 16:51:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:12:39.646 16:51:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:39.646 16:51:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:12:39.646 16:51:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:39.646 16:51:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:12:39.646 16:51:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:39.646 16:51:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:12:39.646 16:51:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:39.646 16:51:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:12:39.646 16:51:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:39.646 16:51:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:12:39.646 16:51:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:39.646 16:51:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:12:39.646 16:51:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:39.646 16:51:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:12:39.646 16:51:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:39.646 16:51:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:12:39.646 16:51:28 -- bdev/blockdev.sh@339 -- # for b 
in "${bdevs_name[@]}" 00:12:39.646 16:51:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:12:39.646 16:51:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:39.646 16:51:28 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:12:39.646 16:51:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:39.646 16:51:28 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:12:39.646 16:51:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:39.646 16:51:28 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:12:39.646 16:51:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:39.646 16:51:28 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:12:39.646 16:51:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:39.646 16:51:28 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:12:39.646 16:51:28 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:39.646 16:51:28 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:39.646 16:51:28 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:39.646 16:51:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:39.646 16:51:28 -- common/autotest_common.sh@10 -- # set +x 00:12:39.646 ************************************ 00:12:39.646 START TEST bdev_fio_rw_verify 00:12:39.646 ************************************ 00:12:39.646 16:51:28 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:39.646 16:51:28 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:39.646 16:51:28 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:12:39.646 16:51:28 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:39.646 16:51:28 -- common/autotest_common.sh@1328 -- # local sanitizers 00:12:39.646 16:51:28 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:39.646 16:51:28 -- common/autotest_common.sh@1330 -- # shift 00:12:39.646 16:51:28 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:12:39.646 16:51:28 -- common/autotest_common.sh@1333 -- # for sanitizer in 
"${sanitizers[@]}" 00:12:39.646 16:51:28 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:39.646 16:51:28 -- common/autotest_common.sh@1334 -- # grep libasan 00:12:39.646 16:51:28 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:12:39.646 16:51:28 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:12:39.646 16:51:28 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:12:39.646 16:51:28 -- common/autotest_common.sh@1336 -- # break 00:12:39.646 16:51:28 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:39.646 16:51:28 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:39.646 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.646 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.646 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.646 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.646 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.646 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.646 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.646 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.646 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.646 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.646 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.646 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.646 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.646 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.646 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.646 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:39.646 fio-3.35 00:12:39.646 Starting 16 threads 00:12:51.849 00:12:51.849 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=109915: Tue Nov 5 16:51:39 2024 00:12:51.849 read: IOPS=78.5k, BW=307MiB/s (321MB/s)(3066MiB/10001msec) 00:12:51.849 slat (usec): min=2, max=52032, avg=36.09, stdev=461.80 00:12:51.849 clat (usec): min=8, max=52210, avg=289.52, stdev=1313.32 00:12:51.849 lat (usec): 
min=18, max=52226, avg=325.61, stdev=1391.40 00:12:51.849 clat percentiles (usec): 00:12:51.849 | 50.000th=[ 167], 99.000th=[ 734], 99.900th=[16450], 99.990th=[28443], 00:12:51.849 | 99.999th=[45876] 00:12:51.849 write: IOPS=124k, BW=483MiB/s (507MB/s)(4798MiB/9925msec); 0 zone resets 00:12:51.849 slat (usec): min=4, max=48058, avg=64.64, stdev=632.63 00:12:51.849 clat (usec): min=9, max=51405, avg=371.93, stdev=1475.02 00:12:51.849 lat (usec): min=39, max=51434, avg=436.58, stdev=1605.60 00:12:51.849 clat percentiles (usec): 00:12:51.849 | 50.000th=[ 215], 99.000th=[ 5211], 99.900th=[17433], 99.990th=[32113], 00:12:51.849 | 99.999th=[44303] 00:12:51.849 bw ( KiB/s): min=286112, max=772136, per=99.41%, avg=492084.42, stdev=8481.82, samples=304 00:12:51.849 iops : min=71528, max=193034, avg=123021.11, stdev=2120.45, samples=304 00:12:51.849 lat (usec) : 10=0.01%, 20=0.01%, 50=0.89%, 100=12.95%, 250=56.28% 00:12:51.849 lat (usec) : 500=26.61%, 750=1.79%, 1000=0.28% 00:12:51.849 lat (msec) : 2=0.15%, 4=0.08%, 10=0.19%, 20=0.67%, 50=0.08% 00:12:51.849 lat (msec) : 100=0.01% 00:12:51.849 cpu : usr=55.99%, sys=2.30%, ctx=227354, majf=3, minf=85216 00:12:51.849 IO depths : 1=11.2%, 2=23.7%, 4=51.9%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:51.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.849 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.849 issued rwts: total=784824,1228199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:51.849 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:51.849 00:12:51.849 Run status group 0 (all jobs): 00:12:51.849 READ: bw=307MiB/s (321MB/s), 307MiB/s-307MiB/s (321MB/s-321MB/s), io=3066MiB (3215MB), run=10001-10001msec 00:12:51.849 WRITE: bw=483MiB/s (507MB/s), 483MiB/s-483MiB/s (507MB/s-507MB/s), io=4798MiB (5031MB), run=9925-9925msec 00:12:53.230 ----------------------------------------------------- 00:12:53.230 Suppressions used: 00:12:53.230 count bytes template 00:12:53.230 16 140 /usr/src/fio/parse.c 00:12:53.230 11722 1125312 /usr/src/fio/iolog.c 00:12:53.230 1 904 libcrypto.so 00:12:53.230 ----------------------------------------------------- 00:12:53.230 00:12:53.230 00:12:53.230 real 0m13.609s 00:12:53.230 user 1m34.781s 00:12:53.231 sys 0m4.635s 00:12:53.231 16:51:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:53.231 16:51:41 -- common/autotest_common.sh@10 -- # set +x 00:12:53.231 ************************************ 00:12:53.231 END TEST bdev_fio_rw_verify 00:12:53.231 ************************************ 00:12:53.231 16:51:41 -- bdev/blockdev.sh@348 -- # rm -f 00:12:53.231 16:51:41 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:53.231 16:51:41 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:53.231 16:51:41 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:53.231 16:51:41 -- common/autotest_common.sh@1270 -- # local workload=trim 00:12:53.231 16:51:41 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:12:53.231 16:51:41 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:53.231 16:51:41 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:53.231 16:51:41 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:53.231 16:51:41 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:12:53.231 16:51:41 -- 
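
The rw-verify run above passes: 16 jobs, roughly 784k reads and 1.23M writes issued, zero short or dropped I/Os, and the suppression summary at the end is informational. One detail worth noting from the setup trace is the LD_PRELOAD handling: fio itself is not built with ASan, but the SPDK fio plugin is, so the harness ldd's the plugin, finds the sanitizer runtime it was linked against, and preloads it alongside the plugin. A condensed sketch of that logic (paths as in the trace; the libclang_rt.asan fallback for clang builds and some flags are elided):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    if [[ -n $asan_lib ]]; then
        # Preload the runtime first so fio can load the instrumented plugin.
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
            --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
            /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
            --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    fi

Without the preload, fio would fail to resolve the ASan interceptor symbols the moment it loads spdk_bdev.
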
common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:53.231 16:51:41 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:53.231 16:51:41 -- common/autotest_common.sh@1290 -- # cat 00:12:53.231 16:51:41 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:12:53.231 16:51:41 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:12:53.231 16:51:41 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:12:53.231 16:51:41 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:53.232 16:51:41 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "4e12e0fb-d81f-4b2f-8504-6bfa62665d5d"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4e12e0fb-d81f-4b2f-8504-6bfa62665d5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "f35f2b77-53a5-51c9-b1c0-77c12c430197"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f35f2b77-53a5-51c9-b1c0-77c12c430197",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "23dbd07a-ad13-5aca-8d1d-0b717ea89880"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "23dbd07a-ad13-5aca-8d1d-0b717ea89880",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "4be6b632-edf3-5c86-80e3-78ab731ddc60"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4be6b632-edf3-5c86-80e3-78ab731ddc60",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "ca101dd3-1e1c-59d9-ae30-0a377e858687"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ca101dd3-1e1c-59d9-ae30-0a377e858687",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "45f4ebc0-b496-5fbf-a319-73f544d65faf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "45f4ebc0-b496-5fbf-a319-73f544d65faf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "38316be1-1e51-5b68-ab60-d2c8ee12e3c1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "38316be1-1e51-5b68-ab60-d2c8ee12e3c1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "5899ac10-08d2-5ced-b936-b693b46c9b42"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5899ac10-08d2-5ced-b936-b693b46c9b42",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "039600ae-153f-5357-8010-4c756a4def25"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "039600ae-153f-5357-8010-4c756a4def25",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' 
' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "c49b79c9-6f58-5321-9158-bd3e71ac16eb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c49b79c9-6f58-5321-9158-bd3e71ac16eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "7c4a1f13-6856-5380-8056-b951f33d017d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7c4a1f13-6856-5380-8056-b951f33d017d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "97aafead-569b-5716-9832-5c8bc06a2bce"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "97aafead-569b-5716-9832-5c8bc06a2bce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "b5747af3-280b-42bc-97eb-37367a575557"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b5747af3-280b-42bc-97eb-37367a575557",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b5747af3-280b-42bc-97eb-37367a575557",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "7ed1fc83-6142-4d98-8929-27be6d796fb4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "ccc30e1b-0385-49eb-8cd3-04d8e441b556",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "32dff25a-8393-4243-beb8-e5e01a7fba76"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "32dff25a-8393-4243-beb8-e5e01a7fba76",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "32dff25a-8393-4243-beb8-e5e01a7fba76",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "47eec94e-b2bc-4792-bc79-fc2b46133391",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "aff1333d-79bf-4cfe-a3b6-fa88efab46ef",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "2648913c-741e-40b7-9168-a285a4bc74d6"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2648913c-741e-40b7-9168-a285a4bc74d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2648913c-741e-40b7-9168-a285a4bc74d6",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "6ba833c9-5d7d-4bcf-8bf0-1e4b30a02dcd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 
65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "6aacdb4d-c674-4bcd-9fb2-dd1509ff86d6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "0208e279-a692-45a2-bfa5-0fc944d27fee"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "0208e279-a692-45a2-bfa5-0fc944d27fee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:53.232 16:51:41 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:12:53.232 Malloc1p0 00:12:53.232 Malloc1p1 00:12:53.232 Malloc2p0 00:12:53.232 Malloc2p1 00:12:53.232 Malloc2p2 00:12:53.232 Malloc2p3 00:12:53.232 Malloc2p4 00:12:53.232 Malloc2p5 00:12:53.232 Malloc2p6 00:12:53.232 Malloc2p7 00:12:53.232 TestPT 00:12:53.232 raid0 00:12:53.232 concat0 ]] 00:12:53.232 16:51:41 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:53.233 16:51:41 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "4e12e0fb-d81f-4b2f-8504-6bfa62665d5d"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4e12e0fb-d81f-4b2f-8504-6bfa62665d5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "f35f2b77-53a5-51c9-b1c0-77c12c430197"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f35f2b77-53a5-51c9-b1c0-77c12c430197",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "23dbd07a-ad13-5aca-8d1d-0b717ea89880"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "23dbd07a-ad13-5aca-8d1d-0b717ea89880",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "4be6b632-edf3-5c86-80e3-78ab731ddc60"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4be6b632-edf3-5c86-80e3-78ab731ddc60",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "ca101dd3-1e1c-59d9-ae30-0a377e858687"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ca101dd3-1e1c-59d9-ae30-0a377e858687",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "45f4ebc0-b496-5fbf-a319-73f544d65faf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "45f4ebc0-b496-5fbf-a319-73f544d65faf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "38316be1-1e51-5b68-ab60-d2c8ee12e3c1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "38316be1-1e51-5b68-ab60-d2c8ee12e3c1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "5899ac10-08d2-5ced-b936-b693b46c9b42"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": 
"5899ac10-08d2-5ced-b936-b693b46c9b42",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "039600ae-153f-5357-8010-4c756a4def25"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "039600ae-153f-5357-8010-4c756a4def25",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "c49b79c9-6f58-5321-9158-bd3e71ac16eb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c49b79c9-6f58-5321-9158-bd3e71ac16eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "7c4a1f13-6856-5380-8056-b951f33d017d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7c4a1f13-6856-5380-8056-b951f33d017d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "97aafead-569b-5716-9832-5c8bc06a2bce"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "97aafead-569b-5716-9832-5c8bc06a2bce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' 
' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "b5747af3-280b-42bc-97eb-37367a575557"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b5747af3-280b-42bc-97eb-37367a575557",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b5747af3-280b-42bc-97eb-37367a575557",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "7ed1fc83-6142-4d98-8929-27be6d796fb4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "ccc30e1b-0385-49eb-8cd3-04d8e441b556",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "32dff25a-8393-4243-beb8-e5e01a7fba76"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "32dff25a-8393-4243-beb8-e5e01a7fba76",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "32dff25a-8393-4243-beb8-e5e01a7fba76",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "47eec94e-b2bc-4792-bc79-fc2b46133391",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "aff1333d-79bf-4cfe-a3b6-fa88efab46ef",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "2648913c-741e-40b7-9168-a285a4bc74d6"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2648913c-741e-40b7-9168-a285a4bc74d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2648913c-741e-40b7-9168-a285a4bc74d6",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "6ba833c9-5d7d-4bcf-8bf0-1e4b30a02dcd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "6aacdb4d-c674-4bcd-9fb2-dd1509ff86d6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "0208e279-a692-45a2-bfa5-0fc944d27fee"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "0208e279-a692-45a2-bfa5-0fc944d27fee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:53.233 16:51:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:53.233 16:51:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:12:53.233 16:51:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:12:53.233 16:51:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:53.233 16:51:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:12:53.233 16:51:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:12:53.233 16:51:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:53.233 16:51:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:12:53.233 16:51:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:12:53.233 16:51:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:53.233 16:51:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:12:53.233 16:51:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:12:53.233 16:51:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:53.233 16:51:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:12:53.233 16:51:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:12:53.233 16:51:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 
00:12:53.233 16:51:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:12:53.233 16:51:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:12:53.233 16:51:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:53.233 16:51:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:12:53.233 16:51:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:12:53.233 16:51:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:53.233 16:51:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:12:53.233 16:51:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:12:53.233 16:51:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:53.233 16:51:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:12:53.233 16:51:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:12:53.233 16:51:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:53.233 16:51:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:12:53.233 16:51:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:12:53.233 16:51:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:53.233 16:51:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:12:53.233 16:51:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:12:53.233 16:51:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:53.234 16:51:42 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:12:53.234 16:51:42 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:12:53.234 16:51:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:53.234 16:51:42 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:12:53.234 16:51:42 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:12:53.234 16:51:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:53.234 16:51:42 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:12:53.234 16:51:42 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:12:53.234 16:51:42 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:53.234 16:51:42 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:53.234 16:51:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:53.234 16:51:42 -- common/autotest_common.sh@10 -- # set +x 00:12:53.234 ************************************ 00:12:53.234 START TEST bdev_fio_trim 00:12:53.234 ************************************ 00:12:53.234 16:51:42 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:53.234 16:51:42 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:53.234 16:51:42 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:12:53.234 16:51:42 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:53.234 16:51:42 -- common/autotest_common.sh@1328 -- # local sanitizers 00:12:53.234 16:51:42 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:53.234 16:51:42 -- common/autotest_common.sh@1330 -- # shift 00:12:53.234 16:51:42 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:12:53.234 16:51:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:12:53.234 16:51:42 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:53.234 16:51:42 -- common/autotest_common.sh@1334 -- # grep libasan 00:12:53.234 16:51:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:12:53.234 16:51:42 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:12:53.234 16:51:42 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:12:53.234 16:51:42 -- common/autotest_common.sh@1336 -- # break 00:12:53.234 16:51:42 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:53.234 16:51:42 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:53.493 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:53.493 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:53.493 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:53.493 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:53.493 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:53.493 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:53.493 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:53.493 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:53.493 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:53.493 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:53.493 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:12:53.493 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:53.493 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:53.493 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:53.493 fio-3.35 00:12:53.493 Starting 14 threads 00:13:05.756 00:13:05.756 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=110134: Tue Nov 5 16:51:53 2024 00:13:05.756 write: IOPS=149k, BW=581MiB/s (609MB/s)(5818MiB/10016msec); 0 zone resets 00:13:05.756 slat (usec): min=2, max=36698, avg=33.95, stdev=394.63 00:13:05.756 clat (usec): min=16, max=36896, avg=241.32, stdev=1057.91 00:13:05.756 lat (usec): min=38, max=36925, avg=275.27, stdev=1128.56 00:13:05.756 clat percentiles (usec): 00:13:05.756 | 50.000th=[ 157], 99.000th=[ 498], 99.900th=[16188], 99.990th=[20317], 00:13:05.756 | 99.999th=[28181] 00:13:05.756 bw ( KiB/s): min=368104, max=936379, per=100.00%, avg=598817.08, stdev=11802.59, samples=267 00:13:05.756 iops : min=92026, max=234094, avg=149704.13, stdev=2950.65, samples=267 00:13:05.756 trim: IOPS=149k, BW=581MiB/s (609MB/s)(5818MiB/10016msec); 0 zone resets 00:13:05.756 slat (usec): min=4, max=28033, avg=22.83, stdev=316.76 00:13:05.756 clat (usec): min=4, max=36926, avg=253.98, stdev=1073.36 00:13:05.756 lat (usec): min=13, max=36945, avg=276.81, stdev=1118.93 00:13:05.756 clat percentiles (usec): 00:13:05.756 | 50.000th=[ 174], 99.000th=[ 396], 99.900th=[16188], 99.990th=[20841], 00:13:05.756 | 99.999th=[28181] 00:13:05.756 bw ( KiB/s): min=368168, max=936443, per=100.00%, avg=598820.45, stdev=11802.95, samples=267 00:13:05.756 iops : min=92042, max=234110, avg=149704.97, stdev=2950.74, samples=267 00:13:05.756 lat (usec) : 10=0.14%, 20=0.36%, 50=1.38%, 100=12.87%, 250=69.91% 00:13:05.756 lat (usec) : 500=14.58%, 750=0.24%, 1000=0.01% 00:13:05.756 lat (msec) : 2=0.01%, 4=0.01%, 10=0.04%, 20=0.42%, 50=0.02% 00:13:05.756 cpu : usr=68.89%, sys=0.62%, ctx=171005, majf=0, minf=878 00:13:05.756 IO depths : 1=12.3%, 2=24.5%, 4=50.0%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:05.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.756 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.756 issued rwts: total=0,1489354,1489359,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.756 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:05.756 00:13:05.756 Run status group 0 (all jobs): 00:13:05.756 WRITE: bw=581MiB/s (609MB/s), 581MiB/s-581MiB/s (609MB/s-609MB/s), io=5818MiB (6100MB), run=10016-10016msec 00:13:05.756 TRIM: bw=581MiB/s (609MB/s), 581MiB/s-581MiB/s (609MB/s-609MB/s), io=5818MiB (6100MB), run=10016-10016msec 00:13:06.690 ----------------------------------------------------- 00:13:06.690 Suppressions used: 00:13:06.690 count bytes template 00:13:06.690 14 129 /usr/src/fio/parse.c 00:13:06.690 1 904 libcrypto.so 00:13:06.690 ----------------------------------------------------- 00:13:06.690 00:13:06.690 00:13:06.690 real 0m13.315s 00:13:06.690 user 1m41.165s 00:13:06.690 sys 0m1.829s 00:13:06.690 16:51:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:06.690 16:51:55 -- common/autotest_common.sh@10 -- # set +x 00:13:06.690 ************************************ 00:13:06.690 END TEST bdev_fio_trim 00:13:06.690 ************************************ 00:13:06.690 16:51:55 -- 
bdev/blockdev.sh@366 -- # rm -f 00:13:06.690 16:51:55 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:06.690 16:51:55 -- bdev/blockdev.sh@368 -- # popd 00:13:06.690 /home/vagrant/spdk_repo/spdk 00:13:06.690 16:51:55 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:13:06.690 00:13:06.690 real 0m27.242s 00:13:06.690 user 3m16.176s 00:13:06.690 sys 0m6.543s 00:13:06.690 16:51:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:06.690 16:51:55 -- common/autotest_common.sh@10 -- # set +x 00:13:06.690 ************************************ 00:13:06.690 END TEST bdev_fio 00:13:06.690 ************************************ 00:13:06.690 16:51:55 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:06.690 16:51:55 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:06.690 16:51:55 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:13:06.690 16:51:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:06.690 16:51:55 -- common/autotest_common.sh@10 -- # set +x 00:13:06.690 ************************************ 00:13:06.690 START TEST bdev_verify 00:13:06.690 ************************************ 00:13:06.690 16:51:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:06.690 [2024-11-05 16:51:55.518926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:06.690 [2024-11-05 16:51:55.519139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110315 ] 00:13:06.966 [2024-11-05 16:51:55.673118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:06.966 [2024-11-05 16:51:55.840724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.966 [2024-11-05 16:51:55.840731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.542 [2024-11-05 16:51:56.188423] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:07.542 [2024-11-05 16:51:56.188538] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:07.542 [2024-11-05 16:51:56.196363] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:07.542 [2024-11-05 16:51:56.196453] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:07.542 [2024-11-05 16:51:56.204428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:07.542 [2024-11-05 16:51:56.204489] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:07.542 [2024-11-05 16:51:56.204551] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:07.542 [2024-11-05 16:51:56.383350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:07.542 [2024-11-05 16:51:56.383502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.542 [2024-11-05 16:51:56.383578] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000009980 00:13:07.542 [2024-11-05 16:51:56.383601] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.542 [2024-11-05 16:51:56.386514] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.542 [2024-11-05 16:51:56.386567] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:08.109 Running I/O for 5 seconds... 00:13:13.376 00:13:13.376 Latency(us) 00:13:13.376 [2024-11-05T16:52:02.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x0 length 0x1000 00:13:13.376 Malloc0 : 5.15 1737.94 6.79 0.00 0.00 73198.43 1906.50 135361.63 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x1000 length 0x1000 00:13:13.376 Malloc0 : 5.16 1723.48 6.73 0.00 0.00 73252.52 1906.50 112483.61 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x0 length 0x800 00:13:13.376 Malloc1p0 : 5.15 1183.83 4.62 0.00 0.00 107430.48 3753.43 129642.12 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x800 length 0x800 00:13:13.376 Malloc1p0 : 5.17 1181.20 4.61 0.00 0.00 106861.10 3678.95 107717.35 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x0 length 0x800 00:13:13.376 Malloc1p1 : 5.16 1183.54 4.62 0.00 0.00 107276.86 3664.06 125829.12 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x800 length 0x800 00:13:13.376 Malloc1p1 : 5.17 1180.75 4.61 0.00 0.00 106739.65 3574.69 104380.97 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x0 length 0x200 00:13:13.376 Malloc2p0 : 5.16 1183.03 4.62 0.00 0.00 107144.85 3381.06 122016.12 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x200 length 0x200 00:13:13.376 Malloc2p0 : 5.17 1180.31 4.61 0.00 0.00 106615.23 3559.80 100567.97 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x0 length 0x200 00:13:13.376 Malloc2p1 : 5.16 1182.50 4.62 0.00 0.00 107031.54 3768.32 118679.74 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x200 length 0x200 00:13:13.376 Malloc2p1 : 5.17 1179.87 4.61 0.00 0.00 106498.02 3693.85 97708.22 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x0 length 0x200 00:13:13.376 Malloc2p2 : 5.16 1181.99 4.62 0.00 0.00 106898.98 3410.85 115819.99 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 
0x200 length 0x200 00:13:13.376 Malloc2p2 : 5.17 1179.42 4.61 0.00 0.00 106362.77 3559.80 93895.21 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x0 length 0x200 00:13:13.376 Malloc2p3 : 5.16 1181.47 4.62 0.00 0.00 106786.94 3470.43 112960.23 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x200 length 0x200 00:13:13.376 Malloc2p3 : 5.18 1178.96 4.61 0.00 0.00 106235.34 2859.75 91512.09 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x0 length 0x200 00:13:13.376 Malloc2p4 : 5.17 1181.04 4.61 0.00 0.00 106668.95 3574.69 109623.85 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x200 length 0x200 00:13:13.376 Malloc2p4 : 5.18 1178.64 4.60 0.00 0.00 106146.27 3872.58 87699.08 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x0 length 0x200 00:13:13.376 Malloc2p5 : 5.17 1180.60 4.61 0.00 0.00 106535.61 3678.95 106287.48 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x200 length 0x200 00:13:13.376 Malloc2p5 : 5.18 1178.37 4.60 0.00 0.00 105998.03 4170.47 83409.45 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x0 length 0x200 00:13:13.376 Malloc2p6 : 5.17 1180.16 4.61 0.00 0.00 106404.38 3649.16 102951.10 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x200 length 0x200 00:13:13.376 Malloc2p6 : 5.18 1178.09 4.60 0.00 0.00 105848.49 4081.11 79596.45 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x0 length 0x200 00:13:13.376 Malloc2p7 : 5.17 1179.73 4.61 0.00 0.00 106297.36 3798.11 99614.72 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x200 length 0x200 00:13:13.376 Malloc2p7 : 5.18 1192.59 4.66 0.00 0.00 104784.61 1712.87 80549.70 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x0 length 0x1000 00:13:13.376 TestPT : 5.17 1170.01 4.57 0.00 0.00 107019.00 4081.11 99138.09 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x1000 length 0x1000 00:13:13.376 TestPT : 5.19 1176.63 4.60 0.00 0.00 106017.84 2785.28 80073.08 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 Verification LBA range: start 0x0 length 0x2000 00:13:13.376 raid0 : 5.18 1193.03 4.66 0.00 0.00 105251.36 3872.58 92941.96 00:13:13.376 [2024-11-05T16:52:02.253Z] Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:13.376 
Verification LBA range: start 0x2000 length 0x2000 00:13:13.377 raid0 : 5.16 1183.60 4.62 0.00 0.00 107657.35 3619.37 128688.87 00:13:13.377 [2024-11-05T16:52:02.254Z] Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:13.377 Verification LBA range: start 0x0 length 0x2000 00:13:13.377 concat0 : 5.18 1192.79 4.66 0.00 0.00 105089.33 4200.26 88652.33 00:13:13.377 [2024-11-05T16:52:02.254Z] Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:13.377 Verification LBA range: start 0x2000 length 0x2000 00:13:13.377 concat0 : 5.16 1183.08 4.62 0.00 0.00 107548.31 4021.53 124875.87 00:13:13.377 [2024-11-05T16:52:02.254Z] Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:13.377 Verification LBA range: start 0x0 length 0x1000 00:13:13.377 raid1 : 5.18 1192.55 4.66 0.00 0.00 104942.02 3842.79 83409.45 00:13:13.377 [2024-11-05T16:52:02.254Z] Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:13.377 Verification LBA range: start 0x1000 length 0x1000 00:13:13.377 raid1 : 5.16 1182.54 4.62 0.00 0.00 107410.70 4468.36 120586.24 00:13:13.377 [2024-11-05T16:52:02.254Z] Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:13.377 Verification LBA range: start 0x0 length 0x4e2 00:13:13.377 AIO0 : 5.19 1191.68 4.65 0.00 0.00 104725.56 4766.25 80073.08 00:13:13.377 [2024-11-05T16:52:02.254Z] Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:13.377 Verification LBA range: start 0x4e2 length 0x4e2 00:13:13.377 AIO0 : 5.16 1181.68 4.62 0.00 0.00 107223.75 7268.54 111530.36 00:13:13.377 [2024-11-05T16:52:02.254Z] =================================================================================================================== 00:13:13.377 [2024-11-05T16:52:02.254Z] Total : 38935.08 152.09 0.00 0.00 103497.75 1712.87 135361.63 00:13:15.281 00:13:15.281 real 0m8.416s 00:13:15.281 user 0m14.802s 00:13:15.281 sys 0m0.601s 00:13:15.281 16:52:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:15.281 16:52:03 -- common/autotest_common.sh@10 -- # set +x 00:13:15.281 ************************************ 00:13:15.281 END TEST bdev_verify 00:13:15.281 ************************************ 00:13:15.281 16:52:03 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:15.281 16:52:03 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:13:15.281 16:52:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:15.281 16:52:03 -- common/autotest_common.sh@10 -- # set +x 00:13:15.281 ************************************ 00:13:15.281 START TEST bdev_verify_big_io 00:13:15.281 ************************************ 00:13:15.281 16:52:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:15.281 [2024-11-05 16:52:03.988806] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
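The bdev_verify stage above finished in 8.4s of wall time against 14.8s of user time, consistent with the 0x3 core mask (two reactors submitting in parallel), and bdev_verify_big_io now repeats the pattern-verify workload with 64 KiB IOs. A minimal sketch of replaying the small-block verify pass by hand, assuming the spdk_repo layout used throughout this run, with flags copied from the invocation above:

  # sketch: rerun the bdev_verify workload outside the test harness
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/bdevperf \
      --json test/bdev/bdev.json \   # bdev config generated earlier in this log
      -q 128 -o 4096 \               # queue depth 128, 4 KiB IOs
      -w verify -t 5 -C -m 0x3       # 5 s verify pass on cores 0 and 1

bdevperf should exit non-zero on any read-back miscompare, which is what run_test reports as a failure.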
00:13:15.281 [2024-11-05 16:52:03.989026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110440 ] 00:13:15.281 [2024-11-05 16:52:04.161628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:15.539 [2024-11-05 16:52:04.330500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.539 [2024-11-05 16:52:04.330509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.797 [2024-11-05 16:52:04.684120] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:15.797 [2024-11-05 16:52:04.684226] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:16.056 [2024-11-05 16:52:04.692072] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:16.056 [2024-11-05 16:52:04.692159] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:16.056 [2024-11-05 16:52:04.700069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:16.056 [2024-11-05 16:52:04.700136] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:16.056 [2024-11-05 16:52:04.700211] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:16.056 [2024-11-05 16:52:04.886476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:16.056 [2024-11-05 16:52:04.886615] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.056 [2024-11-05 16:52:04.886665] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:16.056 [2024-11-05 16:52:04.886688] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.056 [2024-11-05 16:52:04.889219] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.056 [2024-11-05 16:52:04.889276] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:16.623 [2024-11-05 16:52:05.230951] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:16.623 [2024-11-05 16:52:05.234222] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:16.623 [2024-11-05 16:52:05.237934] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:13:16.623 [2024-11-05 16:52:05.241628] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:13:16.623 [2024-11-05 16:52:05.244732] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:16.623 [2024-11-05 16:52:05.248415] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:16.623 [2024-11-05 16:52:05.251525] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:16.623 [2024-11-05 16:52:05.255287] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:16.623 [2024-11-05 16:52:05.258342] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:16.623 [2024-11-05 16:52:05.262020] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:16.623 [2024-11-05 16:52:05.265129] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:16.623 [2024-11-05 16:52:05.268847] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:16.623 [2024-11-05 16:52:05.272138] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:16.623 [2024-11-05 16:52:05.275817] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:16.623 [2024-11-05 16:52:05.279533] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:13:16.623 [2024-11-05 16:52:05.282581] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:13:16.623 [2024-11-05 16:52:05.356509] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:16.623 [2024-11-05 16:52:05.362611] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:16.623 Running I/O for 5 seconds... 00:13:23.185 00:13:23.185 Latency(us) 00:13:23.185 [2024-11-05T16:52:12.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x0 length 0x100 00:13:23.185 Malloc0 : 5.62 383.97 24.00 0.00 0.00 326642.20 20137.43 892242.85 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x100 length 0x100 00:13:23.185 Malloc0 : 5.79 387.66 24.23 0.00 0.00 303494.66 17754.30 606267.58 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x0 length 0x80 00:13:23.185 Malloc1p0 : 5.70 162.51 10.16 0.00 0.00 757277.81 43849.54 1776859.69 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x80 length 0x80 00:13:23.185 Malloc1p0 : 5.86 129.44 8.09 0.00 0.00 900952.58 40513.16 2013265.92 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x0 length 0x80 00:13:23.185 Malloc1p1 : 5.83 124.21 7.76 0.00 0.00 968923.92 42657.98 1967509.88 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x80 length 0x80 00:13:23.185 Malloc1p1 : 5.90 128.54 8.03 0.00 0.00 887382.59 38844.97 2028517.93 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x0 length 0x20 00:13:23.185 Malloc2p0 : 5.62 69.38 4.34 0.00 0.00 433180.13 7119.59 701592.67 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x20 length 0x20 00:13:23.185 Malloc2p0 : 5.79 70.46 4.40 0.00 0.00 400357.79 7685.59 421336.90 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x0 length 0x20 00:13:23.185 Malloc2p1 : 5.62 69.36 4.34 0.00 0.00 431470.92 6821.70 686340.65 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x20 length 0x20 00:13:23.185 Malloc2p1 : 5.83 73.55 4.60 0.00 0.00 384199.70 7864.32 402271.88 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x0 length 0x20 00:13:23.185 Malloc2p2 : 5.62 69.35 
4.33 0.00 0.00 429756.98 6464.23 674901.64 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x20 length 0x20 00:13:23.185 Malloc2p2 : 5.83 73.54 4.60 0.00 0.00 382556.02 7566.43 387019.87 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x0 length 0x20 00:13:23.185 Malloc2p3 : 5.63 69.33 4.33 0.00 0.00 428224.79 6851.49 663462.63 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x20 length 0x20 00:13:23.185 Malloc2p3 : 5.84 73.52 4.60 0.00 0.00 380903.66 7089.80 373674.36 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x0 length 0x20 00:13:23.185 Malloc2p4 : 5.63 69.32 4.33 0.00 0.00 426509.43 8817.57 648210.62 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x20 length 0x20 00:13:23.185 Malloc2p4 : 5.84 73.51 4.59 0.00 0.00 379386.85 7298.33 360328.84 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x0 length 0x20 00:13:23.185 Malloc2p5 : 5.63 69.30 4.33 0.00 0.00 424392.65 8519.68 629145.60 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x20 length 0x20 00:13:23.185 Malloc2p5 : 5.84 73.49 4.59 0.00 0.00 377858.30 7060.01 352702.84 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x0 length 0x20 00:13:23.185 Malloc2p6 : 5.63 69.29 4.33 0.00 0.00 422387.39 8460.10 613893.59 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x20 length 0x20 00:13:23.185 Malloc2p6 : 5.84 73.47 4.59 0.00 0.00 376277.81 7119.59 354609.34 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x0 length 0x20 00:13:23.185 Malloc2p7 : 5.70 71.57 4.47 0.00 0.00 408088.50 8043.05 598641.57 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x20 length 0x20 00:13:23.185 Malloc2p7 : 5.85 77.09 4.82 0.00 0.00 358705.90 7208.96 354609.34 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x0 length 0x100 00:13:23.185 TestPT : 5.85 124.60 7.79 0.00 0.00 921936.49 59101.56 2043769.95 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x100 length 0x100 00:13:23.185 TestPT : 5.86 137.42 8.59 0.00 0.00 799060.91 8996.31 2028517.93 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x0 length 0x200 00:13:23.185 raid0 : 5.85 
129.52 8.10 0.00 0.00 875119.55 43372.92 1982761.89 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x200 length 0x200 00:13:23.185 raid0 : 5.52 265.09 16.57 0.00 0.00 464308.29 41228.10 796917.76 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x0 length 0x200 00:13:23.185 concat0 : 5.86 135.89 8.49 0.00 0.00 824512.79 42419.67 1998013.91 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x200 length 0x200 00:13:23.185 concat0 : 5.86 123.55 7.72 0.00 0.00 990479.04 42181.35 2028517.93 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x0 length 0x100 00:13:23.185 raid1 : 5.85 146.55 9.16 0.00 0.00 753530.61 22401.40 2013265.92 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x100 length 0x100 00:13:23.185 raid1 : 5.86 123.50 7.72 0.00 0.00 973041.73 42896.29 2013265.92 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x0 length 0x4e 00:13:23.185 AIO0 : 5.86 148.98 9.31 0.00 0.00 446790.61 2800.17 1189657.13 00:13:23.185 [2024-11-05T16:52:12.062Z] Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:13:23.185 Verification LBA range: start 0x4e length 0x4e 00:13:23.185 AIO0 : 5.85 113.22 7.08 0.00 0.00 641297.30 22997.18 1189657.13 00:13:23.185 [2024-11-05T16:52:12.062Z] =================================================================================================================== 00:13:23.186 [2024-11-05T16:52:12.063Z] Total : 3910.17 244.39 0.00 0.00 576112.58 2800.17 2043769.95 00:13:24.562 00:13:24.562 real 0m9.490s 00:13:24.562 user 0m17.347s 00:13:24.562 sys 0m0.548s 00:13:24.562 16:52:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:24.562 ************************************ 00:13:24.562 END TEST bdev_verify_big_io 00:13:24.562 ************************************ 00:13:24.562 16:52:13 -- common/autotest_common.sh@10 -- # set +x 00:13:24.821 16:52:13 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:24.821 16:52:13 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:24.821 16:52:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:24.821 16:52:13 -- common/autotest_common.sh@10 -- # set +x 00:13:24.821 ************************************ 00:13:24.821 START TEST bdev_write_zeroes 00:13:24.821 ************************************ 00:13:24.821 16:52:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:24.821 [2024-11-05 16:52:13.535695] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
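Before the big-block pass above started, bdevperf clamped the requested queue depth per target (the string of warnings earlier): in verify mode it cannot keep more IOs in flight than a bdev can accept simultaneously, so the small Malloc2p* ranges were capped at 32 and AIO0 at 78. The bdev_write_zeroes stage now starting drives one second of write_zeroes against every bdev; a sketch of the equivalent manual run, flags copied from the invocation above:

  # sketch: one-second write_zeroes pass over every bdev in the config
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/bdevperf \
      --json test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1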
00:13:24.821 [2024-11-05 16:52:13.536553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110580 ] 00:13:24.821 [2024-11-05 16:52:13.705876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.080 [2024-11-05 16:52:13.876831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.338 [2024-11-05 16:52:14.214240] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:25.338 [2024-11-05 16:52:14.214315] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:25.338 [2024-11-05 16:52:14.222212] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:25.338 [2024-11-05 16:52:14.222315] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:25.596 [2024-11-05 16:52:14.230233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:25.596 [2024-11-05 16:52:14.230296] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:25.596 [2024-11-05 16:52:14.230350] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:25.596 [2024-11-05 16:52:14.406795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:25.596 [2024-11-05 16:52:14.406963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.596 [2024-11-05 16:52:14.407023] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:25.596 [2024-11-05 16:52:14.407050] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.597 [2024-11-05 16:52:14.409415] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.597 [2024-11-05 16:52:14.409495] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:26.166 Running I/O for 1 seconds... 
00:13:27.103 00:13:27.103 Latency(us) 00:13:27.103 [2024-11-05T16:52:15.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.103 [2024-11-05T16:52:15.980Z] Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.103 Malloc0 : 1.04 5891.90 23.02 0.00 0.00 21708.36 692.60 40036.54 00:13:27.103 [2024-11-05T16:52:15.980Z] Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.103 Malloc1p0 : 1.04 5885.39 22.99 0.00 0.00 21699.69 975.59 39321.60 00:13:27.103 [2024-11-05T16:52:15.980Z] Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.103 Malloc1p1 : 1.05 5879.16 22.97 0.00 0.00 21677.20 804.31 38368.35 00:13:27.103 [2024-11-05T16:52:15.980Z] Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.103 Malloc2p0 : 1.05 5872.90 22.94 0.00 0.00 21662.31 960.70 37415.10 00:13:27.103 [2024-11-05T16:52:15.980Z] Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.103 Malloc2p1 : 1.05 5866.69 22.92 0.00 0.00 21640.59 856.44 36461.85 00:13:27.103 [2024-11-05T16:52:15.980Z] Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.103 Malloc2p2 : 1.05 5860.54 22.89 0.00 0.00 21617.27 953.25 35746.91 00:13:27.103 [2024-11-05T16:52:15.980Z] Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.104 Malloc2p3 : 1.05 5854.38 22.87 0.00 0.00 21596.59 800.58 34793.66 00:13:27.104 [2024-11-05T16:52:15.981Z] Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.104 Malloc2p4 : 1.05 5848.10 22.84 0.00 0.00 21576.61 960.70 33840.41 00:13:27.104 [2024-11-05T16:52:15.981Z] Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.104 Malloc2p5 : 1.05 5841.98 22.82 0.00 0.00 21554.79 860.16 32887.16 00:13:27.104 [2024-11-05T16:52:15.981Z] Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.104 Malloc2p6 : 1.05 5836.24 22.80 0.00 0.00 21532.20 953.25 31695.59 00:13:27.104 [2024-11-05T16:52:15.981Z] Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.104 Malloc2p7 : 1.05 5830.44 22.78 0.00 0.00 21518.67 800.58 30980.65 00:13:27.104 [2024-11-05T16:52:15.981Z] Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.104 TestPT : 1.05 5824.43 22.75 0.00 0.00 21495.58 942.08 29789.09 00:13:27.104 [2024-11-05T16:52:15.981Z] Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.104 raid0 : 1.06 5817.69 22.73 0.00 0.00 21463.69 1467.11 28240.06 00:13:27.104 [2024-11-05T16:52:15.981Z] Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.104 concat0 : 1.06 5811.17 22.70 0.00 0.00 21413.90 1437.32 26691.03 00:13:27.104 [2024-11-05T16:52:15.981Z] Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.104 raid1 : 1.06 5802.94 22.67 0.00 0.00 21362.52 2308.65 24546.21 00:13:27.104 [2024-11-05T16:52:15.981Z] Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.104 AIO0 : 1.06 5788.52 22.61 0.00 0.00 21312.31 1526.69 23116.33 00:13:27.104 [2024-11-05T16:52:15.981Z] =================================================================================================================== 00:13:27.104 [2024-11-05T16:52:15.981Z] Total : 93512.48 365.28 0.00 
0.00 21552.04 692.60 40036.54 00:13:29.007 00:13:29.007 real 0m4.088s 00:13:29.007 user 0m3.456s 00:13:29.007 sys 0m0.436s 00:13:29.007 16:52:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:29.007 ************************************ 00:13:29.007 END TEST bdev_write_zeroes 00:13:29.007 ************************************ 00:13:29.007 16:52:17 -- common/autotest_common.sh@10 -- # set +x 00:13:29.007 16:52:17 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:29.007 16:52:17 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:29.007 16:52:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:29.007 16:52:17 -- common/autotest_common.sh@10 -- # set +x 00:13:29.007 ************************************ 00:13:29.007 START TEST bdev_json_nonenclosed 00:13:29.007 ************************************ 00:13:29.007 16:52:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:29.007 [2024-11-05 16:52:17.683544] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:29.007 [2024-11-05 16:52:17.683790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110654 ] 00:13:29.007 [2024-11-05 16:52:17.854431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.266 [2024-11-05 16:52:18.042015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.266 [2024-11-05 16:52:18.042356] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:29.266 [2024-11-05 16:52:18.042396] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:29.834 00:13:29.834 real 0m0.802s 00:13:29.834 user 0m0.558s 00:13:29.834 sys 0m0.144s 00:13:29.834 ************************************ 00:13:29.834 END TEST bdev_json_nonenclosed 00:13:29.834 ************************************ 00:13:29.834 16:52:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:29.834 16:52:18 -- common/autotest_common.sh@10 -- # set +x 00:13:29.834 16:52:18 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:29.834 16:52:18 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:29.834 16:52:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:29.834 16:52:18 -- common/autotest_common.sh@10 -- # set +x 00:13:29.834 ************************************ 00:13:29.834 START TEST bdev_json_nonarray 00:13:29.834 ************************************ 00:13:29.834 16:52:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:29.834 [2024-11-05 16:52:18.544810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
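bdev_json_nonenclosed above, and bdev_json_nonarray now starting, are negative tests: each hands bdevperf a deliberately malformed --json config and passes only if initialization aborts with the expected parser error (seen above as "Invalid JSON configuration: not enclosed in {}." and below as "'subsystems' should be an array."). A sketch of the idea with hypothetical stand-in files, since the real nonenclosed.json and nonarray.json under test/bdev/ may differ in detail:

  # hypothetical minimal reproductions of the two malformed configs
  printf '"subsystems": []'   > /tmp/nonenclosed.json   # top level not wrapped in {}
  printf '{"subsystems": {}}' > /tmp/nonarray.json      # "subsystems" is an object, not an array
  ./build/examples/bdevperf --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1
  # expected: *ERROR*: Invalid JSON configuration: not enclosed in {}. and a non-zero exit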
00:13:29.834 [2024-11-05 16:52:18.545060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110692 ] 00:13:29.834 [2024-11-05 16:52:18.718132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.092 [2024-11-05 16:52:18.908299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.092 [2024-11-05 16:52:18.908606] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:30.092 [2024-11-05 16:52:18.908646] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:30.660 00:13:30.660 real 0m0.791s 00:13:30.660 user 0m0.563s 00:13:30.660 sys 0m0.128s 00:13:30.660 16:52:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:30.660 16:52:19 -- common/autotest_common.sh@10 -- # set +x 00:13:30.660 ************************************ 00:13:30.660 END TEST bdev_json_nonarray 00:13:30.660 ************************************ 00:13:30.660 16:52:19 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:13:30.660 16:52:19 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:13:30.660 16:52:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:30.660 16:52:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:30.660 16:52:19 -- common/autotest_common.sh@10 -- # set +x 00:13:30.660 ************************************ 00:13:30.660 START TEST bdev_qos 00:13:30.660 ************************************ 00:13:30.660 16:52:19 -- common/autotest_common.sh@1114 -- # qos_test_suite '' 00:13:30.660 16:52:19 -- bdev/blockdev.sh@444 -- # QOS_PID=110731 00:13:30.660 16:52:19 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:13:30.660 Process qos testing pid: 110731 00:13:30.660 16:52:19 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 110731' 00:13:30.660 16:52:19 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:13:30.660 16:52:19 -- bdev/blockdev.sh@447 -- # waitforlisten 110731 00:13:30.660 16:52:19 -- common/autotest_common.sh@829 -- # '[' -z 110731 ']' 00:13:30.660 16:52:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.660 16:52:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.660 16:52:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.660 16:52:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.660 16:52:19 -- common/autotest_common.sh@10 -- # set +x 00:13:30.660 [2024-11-05 16:52:19.388776] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
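Unlike the earlier one-shot runs, the QoS suite starts bdevperf with -z, which brings the application up idle and waits for an RPC before generating any IO; that is why the harness is shown waiting on /var/tmp/spdk.sock above, and why the Malloc_0 and Null_1 targets can be created over RPC first (their JSON dumps follow). A sketch of that handshake, paths and arguments as in this run:

  # sketch: start bdevperf in wait-for-RPC mode, create targets, then kick off IO
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' &
  QOS_PID=$!
  scripts/rpc.py bdev_malloc_create -b Malloc_0 128 512   # 128 MiB backing, 512 B blocks
  scripts/rpc.py bdev_null_create Null_1 128 512
  examples/bdev/bdevperf/bdevperf.py perform_tests        # releases the -z wait; IO starts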
00:13:30.660 [2024-11-05 16:52:19.388996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110731 ] 00:13:30.925 [2024-11-05 16:52:19.567520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.925 [2024-11-05 16:52:19.760910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.494 16:52:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:31.494 16:52:20 -- common/autotest_common.sh@862 -- # return 0 00:13:31.494 16:52:20 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:13:31.494 16:52:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.494 16:52:20 -- common/autotest_common.sh@10 -- # set +x 00:13:31.765 Malloc_0 00:13:31.765 16:52:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.765 16:52:20 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:13:31.765 16:52:20 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:13:31.765 16:52:20 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:31.765 16:52:20 -- common/autotest_common.sh@899 -- # local i 00:13:31.765 16:52:20 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:31.765 16:52:20 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:31.765 16:52:20 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:31.765 16:52:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.765 16:52:20 -- common/autotest_common.sh@10 -- # set +x 00:13:31.765 16:52:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.765 16:52:20 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:13:31.765 16:52:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.765 16:52:20 -- common/autotest_common.sh@10 -- # set +x 00:13:31.765 [ 00:13:31.765 { 00:13:31.765 "name": "Malloc_0", 00:13:31.765 "aliases": [ 00:13:31.765 "9732dea6-3e25-4f26-9247-9ab1668fb2dc" 00:13:31.765 ], 00:13:31.765 "product_name": "Malloc disk", 00:13:31.765 "block_size": 512, 00:13:31.765 "num_blocks": 262144, 00:13:31.765 "uuid": "9732dea6-3e25-4f26-9247-9ab1668fb2dc", 00:13:31.765 "assigned_rate_limits": { 00:13:31.765 "rw_ios_per_sec": 0, 00:13:31.765 "rw_mbytes_per_sec": 0, 00:13:31.765 "r_mbytes_per_sec": 0, 00:13:31.765 "w_mbytes_per_sec": 0 00:13:31.765 }, 00:13:31.765 "claimed": false, 00:13:31.765 "zoned": false, 00:13:31.765 "supported_io_types": { 00:13:31.765 "read": true, 00:13:31.765 "write": true, 00:13:31.765 "unmap": true, 00:13:31.765 "write_zeroes": true, 00:13:31.765 "flush": true, 00:13:31.765 "reset": true, 00:13:31.765 "compare": false, 00:13:31.765 "compare_and_write": false, 00:13:31.765 "abort": true, 00:13:31.765 "nvme_admin": false, 00:13:31.765 "nvme_io": false 00:13:31.765 }, 00:13:31.765 "memory_domains": [ 00:13:31.765 { 00:13:31.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.765 "dma_device_type": 2 00:13:31.765 } 00:13:31.765 ], 00:13:31.765 "driver_specific": {} 00:13:31.765 } 00:13:31.765 ] 00:13:31.765 16:52:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.765 16:52:20 -- common/autotest_common.sh@905 -- # return 0 00:13:31.765 16:52:20 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:13:31.765 16:52:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.765 16:52:20 -- common/autotest_common.sh@10 -- # 
set +x 00:13:31.765 Null_1 00:13:31.765 16:52:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.765 16:52:20 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:13:31.765 16:52:20 -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:13:31.765 16:52:20 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:31.765 16:52:20 -- common/autotest_common.sh@899 -- # local i 00:13:31.765 16:52:20 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:31.765 16:52:20 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:31.765 16:52:20 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:31.765 16:52:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.765 16:52:20 -- common/autotest_common.sh@10 -- # set +x 00:13:31.765 16:52:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.765 16:52:20 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:13:31.765 16:52:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.765 16:52:20 -- common/autotest_common.sh@10 -- # set +x 00:13:31.765 [ 00:13:31.765 { 00:13:31.765 "name": "Null_1", 00:13:31.765 "aliases": [ 00:13:31.765 "c86e9994-f425-477a-918e-d9b4a15552ae" 00:13:31.765 ], 00:13:31.765 "product_name": "Null disk", 00:13:31.765 "block_size": 512, 00:13:31.765 "num_blocks": 262144, 00:13:31.765 "uuid": "c86e9994-f425-477a-918e-d9b4a15552ae", 00:13:31.765 "assigned_rate_limits": { 00:13:31.765 "rw_ios_per_sec": 0, 00:13:31.765 "rw_mbytes_per_sec": 0, 00:13:31.765 "r_mbytes_per_sec": 0, 00:13:31.765 "w_mbytes_per_sec": 0 00:13:31.765 }, 00:13:31.765 "claimed": false, 00:13:31.765 "zoned": false, 00:13:31.765 "supported_io_types": { 00:13:31.765 "read": true, 00:13:31.765 "write": true, 00:13:31.765 "unmap": false, 00:13:31.765 "write_zeroes": true, 00:13:31.765 "flush": false, 00:13:31.765 "reset": true, 00:13:31.765 "compare": false, 00:13:31.765 "compare_and_write": false, 00:13:31.765 "abort": true, 00:13:31.765 "nvme_admin": false, 00:13:31.765 "nvme_io": false 00:13:31.765 }, 00:13:31.765 "driver_specific": {} 00:13:31.765 } 00:13:31.765 ] 00:13:31.765 16:52:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.765 16:52:20 -- common/autotest_common.sh@905 -- # return 0 00:13:31.765 16:52:20 -- bdev/blockdev.sh@455 -- # qos_function_test 00:13:31.765 16:52:20 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:31.765 16:52:20 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:13:31.765 16:52:20 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:13:31.765 16:52:20 -- bdev/blockdev.sh@410 -- # local io_result=0 00:13:31.765 16:52:20 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:13:31.765 16:52:20 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:13:31.765 16:52:20 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:13:31.765 16:52:20 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:13:31.765 16:52:20 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:31.765 16:52:20 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:31.765 16:52:20 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:31.765 16:52:20 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:31.765 16:52:20 -- bdev/blockdev.sh@376 -- # tail -1 00:13:32.038 Running I/O for 60 seconds... 
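qos_function_test first measures unthrottled read IOPS on Malloc_0 with iostat.py, then derives a cap of roughly a quarter of that figure, rounded down to a whole thousand, and applies it over RPC; the trace below shows 77865 IOPS measured and 19000 chosen. A sketch of the same arithmetic, assuming the grep/tail/awk pipeline shown in the xtrace:

  # sketch: measure unthrottled IOPS, cap at ~25% rounded down to the nearest 1000
  iops=$(scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print $2}')
  limit=$(( ${iops%.*} / 4 / 1000 * 1000 ))               # 77865 -> 19466 -> 19000
  scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec "$limit" Malloc_0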
00:13:37.310 16:52:25 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 77865.12 311460.50 0.00 0.00 315392.00 0.00 0.00 ' 00:13:37.310 16:52:25 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:13:37.310 16:52:25 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:13:37.310 16:52:25 -- bdev/blockdev.sh@378 -- # iostat_result=77865.12 00:13:37.310 16:52:25 -- bdev/blockdev.sh@383 -- # echo 77865 00:13:37.310 16:52:25 -- bdev/blockdev.sh@414 -- # io_result=77865 00:13:37.310 16:52:25 -- bdev/blockdev.sh@416 -- # iops_limit=19000 00:13:37.310 16:52:25 -- bdev/blockdev.sh@417 -- # '[' 19000 -gt 1000 ']' 00:13:37.310 16:52:25 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 19000 Malloc_0 00:13:37.310 16:52:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.310 16:52:25 -- common/autotest_common.sh@10 -- # set +x 00:13:37.310 16:52:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.310 16:52:25 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 19000 IOPS Malloc_0 00:13:37.310 16:52:25 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:37.310 16:52:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:37.310 16:52:25 -- common/autotest_common.sh@10 -- # set +x 00:13:37.310 ************************************ 00:13:37.310 START TEST bdev_qos_iops 00:13:37.310 ************************************ 00:13:37.310 16:52:25 -- common/autotest_common.sh@1114 -- # run_qos_test 19000 IOPS Malloc_0 00:13:37.310 16:52:25 -- bdev/blockdev.sh@387 -- # local qos_limit=19000 00:13:37.310 16:52:25 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:37.310 16:52:25 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:13:37.310 16:52:25 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:13:37.310 16:52:25 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:37.310 16:52:25 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:37.310 16:52:25 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:37.310 16:52:25 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:37.310 16:52:25 -- bdev/blockdev.sh@376 -- # tail -1 00:13:42.582 16:52:30 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 19010.03 76040.11 0.00 0.00 77672.00 0.00 0.00 ' 00:13:42.582 16:52:30 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:13:42.582 16:52:30 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:13:42.582 16:52:30 -- bdev/blockdev.sh@378 -- # iostat_result=19010.03 00:13:42.582 16:52:30 -- bdev/blockdev.sh@383 -- # echo 19010 00:13:42.582 16:52:30 -- bdev/blockdev.sh@390 -- # qos_result=19010 00:13:42.582 16:52:30 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:13:42.582 16:52:30 -- bdev/blockdev.sh@394 -- # lower_limit=17100 00:13:42.582 16:52:30 -- bdev/blockdev.sh@395 -- # upper_limit=20900 00:13:42.582 16:52:30 -- bdev/blockdev.sh@398 -- # '[' 19010 -lt 17100 ']' 00:13:42.582 16:52:30 -- bdev/blockdev.sh@398 -- # '[' 19010 -gt 20900 ']' 00:13:42.582 00:13:42.582 real 0m5.219s 00:13:42.582 user 0m0.103s 00:13:42.582 sys 0m0.044s 00:13:42.582 16:52:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:42.582 16:52:30 -- common/autotest_common.sh@10 -- # set +x 00:13:42.582 ************************************ 00:13:42.582 END TEST bdev_qos_iops 00:13:42.582 ************************************ 00:13:42.582 16:52:31 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:13:42.582 16:52:31 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:42.582 16:52:31 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:13:42.582 16:52:31 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:42.582 16:52:31 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:42.582 16:52:31 -- bdev/blockdev.sh@376 -- # tail -1 00:13:42.582 16:52:31 -- bdev/blockdev.sh@376 -- # grep Null_1 00:13:47.854 16:52:36 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 26684.94 106739.75 0.00 0.00 108544.00 0.00 0.00 ' 00:13:47.854 16:52:36 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:47.854 16:52:36 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:47.854 16:52:36 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:47.854 16:52:36 -- bdev/blockdev.sh@380 -- # iostat_result=108544.00 00:13:47.854 16:52:36 -- bdev/blockdev.sh@383 -- # echo 108544 00:13:47.854 16:52:36 -- bdev/blockdev.sh@425 -- # bw_limit=108544 00:13:47.854 16:52:36 -- bdev/blockdev.sh@426 -- # bw_limit=10 00:13:47.854 16:52:36 -- bdev/blockdev.sh@427 -- # '[' 10 -lt 2 ']' 00:13:47.854 16:52:36 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 10 Null_1 00:13:47.854 16:52:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.854 16:52:36 -- common/autotest_common.sh@10 -- # set +x 00:13:47.854 16:52:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.854 16:52:36 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 10 BANDWIDTH Null_1 00:13:47.854 16:52:36 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:47.854 16:52:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:47.854 16:52:36 -- common/autotest_common.sh@10 -- # set +x 00:13:47.854 ************************************ 00:13:47.854 START TEST bdev_qos_bw 00:13:47.854 ************************************ 00:13:47.854 16:52:36 -- common/autotest_common.sh@1114 -- # run_qos_test 10 BANDWIDTH Null_1 00:13:47.854 16:52:36 -- bdev/blockdev.sh@387 -- # local qos_limit=10 00:13:47.854 16:52:36 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:47.854 16:52:36 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:13:47.854 16:52:36 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:47.854 16:52:36 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:13:47.854 16:52:36 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:47.854 16:52:36 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:47.854 16:52:36 -- bdev/blockdev.sh@376 -- # grep Null_1 00:13:47.854 16:52:36 -- bdev/blockdev.sh@376 -- # tail -1 00:13:53.159 16:52:41 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 2557.15 10228.60 0.00 0.00 10392.00 0.00 0.00 ' 00:13:53.159 16:52:41 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:53.159 16:52:41 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:53.159 16:52:41 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:53.159 16:52:41 -- bdev/blockdev.sh@380 -- # iostat_result=10392.00 00:13:53.159 16:52:41 -- bdev/blockdev.sh@383 -- # echo 10392 00:13:53.159 16:52:41 -- bdev/blockdev.sh@390 -- # qos_result=10392 00:13:53.159 16:52:41 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:53.159 16:52:41 -- bdev/blockdev.sh@392 -- # qos_limit=10240 00:13:53.159 16:52:41 -- bdev/blockdev.sh@394 -- # lower_limit=9216 00:13:53.159 16:52:41 -- bdev/blockdev.sh@395 -- # upper_limit=11264 00:13:53.159 16:52:41 -- bdev/blockdev.sh@398 -- # '[' 10392 -lt 9216 ']' 00:13:53.159 16:52:41 -- bdev/blockdev.sh@398 -- # '[' 
10392 -gt 11264 ']' 00:13:53.159 00:13:53.159 real 0m5.248s 00:13:53.159 user 0m0.130s 00:13:53.159 sys 0m0.020s 00:13:53.159 16:52:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:53.159 16:52:41 -- common/autotest_common.sh@10 -- # set +x 00:13:53.159 ************************************ 00:13:53.159 END TEST bdev_qos_bw 00:13:53.159 ************************************ 00:13:53.159 16:52:41 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:13:53.159 16:52:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.159 16:52:41 -- common/autotest_common.sh@10 -- # set +x 00:13:53.159 16:52:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.159 16:52:41 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:13:53.159 16:52:41 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:53.159 16:52:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:53.159 16:52:41 -- common/autotest_common.sh@10 -- # set +x 00:13:53.159 ************************************ 00:13:53.159 START TEST bdev_qos_ro_bw 00:13:53.159 ************************************ 00:13:53.159 16:52:41 -- common/autotest_common.sh@1114 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:13:53.159 16:52:41 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:13:53.159 16:52:41 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:53.159 16:52:41 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:13:53.159 16:52:41 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:53.159 16:52:41 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:53.159 16:52:41 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:53.159 16:52:41 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:53.159 16:52:41 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:53.159 16:52:41 -- bdev/blockdev.sh@376 -- # tail -1 00:13:58.476 16:52:46 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.89 2047.55 0.00 0.00 2068.00 0.00 0.00 ' 00:13:58.476 16:52:46 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:58.476 16:52:46 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:58.476 16:52:46 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:58.476 16:52:46 -- bdev/blockdev.sh@380 -- # iostat_result=2068.00 00:13:58.476 16:52:46 -- bdev/blockdev.sh@383 -- # echo 2068 00:13:58.476 16:52:46 -- bdev/blockdev.sh@390 -- # qos_result=2068 00:13:58.476 16:52:46 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:58.476 16:52:46 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:13:58.476 16:52:46 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:13:58.476 16:52:46 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:13:58.476 16:52:46 -- bdev/blockdev.sh@398 -- # '[' 2068 -lt 1843 ']' 00:13:58.476 16:52:46 -- bdev/blockdev.sh@398 -- # '[' 2068 -gt 2252 ']' 00:13:58.476 00:13:58.476 real 0m5.172s 00:13:58.476 user 0m0.125s 00:13:58.476 sys 0m0.018s 00:13:58.476 16:52:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:58.476 16:52:46 -- common/autotest_common.sh@10 -- # set +x 00:13:58.476 ************************************ 00:13:58.476 END TEST bdev_qos_ro_bw 00:13:58.476 ************************************ 00:13:58.476 16:52:46 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:13:58.476 16:52:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.476 16:52:46 -- common/autotest_common.sh@10 -- # set +x 00:13:58.735 16:52:47 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.735 16:52:47 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:13:58.735 16:52:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.735 16:52:47 -- common/autotest_common.sh@10 -- # set +x 00:13:58.735 00:13:58.735 Latency(us) 00:13:58.735 [2024-11-05T16:52:47.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.735 [2024-11-05T16:52:47.612Z] Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:58.735 Malloc_0 : 26.67 26032.65 101.69 0.00 0.00 9743.64 2040.55 503316.48 00:13:58.735 [2024-11-05T16:52:47.612Z] Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:58.735 Null_1 : 26.87 25735.17 100.53 0.00 0.00 9927.78 614.40 198276.19 00:13:58.735 [2024-11-05T16:52:47.612Z] =================================================================================================================== 00:13:58.735 [2024-11-05T16:52:47.612Z] Total : 51767.82 202.22 0.00 0.00 9835.53 614.40 503316.48 00:13:58.735 0 00:13:58.735 16:52:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.735 16:52:47 -- bdev/blockdev.sh@459 -- # killprocess 110731 00:13:58.735 16:52:47 -- common/autotest_common.sh@936 -- # '[' -z 110731 ']' 00:13:58.735 16:52:47 -- common/autotest_common.sh@940 -- # kill -0 110731 00:13:58.735 16:52:47 -- common/autotest_common.sh@941 -- # uname 00:13:58.735 16:52:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:58.735 16:52:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110731 00:13:58.735 16:52:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:58.735 16:52:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:58.735 16:52:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110731' 00:13:58.735 killing process with pid 110731 00:13:58.735 16:52:47 -- common/autotest_common.sh@955 -- # kill 110731 00:13:58.735 Received shutdown signal, test time was about 26.903636 seconds 00:13:58.735 00:13:58.735 Latency(us) 00:13:58.735 [2024-11-05T16:52:47.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.735 [2024-11-05T16:52:47.612Z] =================================================================================================================== 00:13:58.735 [2024-11-05T16:52:47.612Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:58.735 16:52:47 -- common/autotest_common.sh@960 -- # wait 110731 00:14:00.112 16:52:48 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:14:00.112 00:14:00.112 real 0m29.456s 00:14:00.112 user 0m30.296s 00:14:00.112 sys 0m0.597s 00:14:00.112 16:52:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:00.112 16:52:48 -- common/autotest_common.sh@10 -- # set +x 00:14:00.112 ************************************ 00:14:00.112 END TEST bdev_qos 00:14:00.112 ************************************ 00:14:00.112 16:52:48 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:14:00.112 16:52:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:00.112 16:52:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:00.112 16:52:48 -- common/autotest_common.sh@10 -- # set +x 00:14:00.112 ************************************ 00:14:00.113 START TEST bdev_qd_sampling 00:14:00.113 ************************************ 00:14:00.113 16:52:48 -- common/autotest_common.sh@1114 -- # qd_sampling_test_suite '' 00:14:00.113 
16:52:48 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:14:00.113 16:52:48 -- bdev/blockdev.sh@539 -- # QD_PID=111200 00:14:00.113 16:52:48 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 111200' 00:14:00.113 Process bdev QD sampling period testing pid: 111200 00:14:00.113 16:52:48 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:14:00.113 16:52:48 -- bdev/blockdev.sh@542 -- # waitforlisten 111200 00:14:00.113 16:52:48 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:14:00.113 16:52:48 -- common/autotest_common.sh@829 -- # '[' -z 111200 ']' 00:14:00.113 16:52:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.113 16:52:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:00.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.113 16:52:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.113 16:52:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:00.113 16:52:48 -- common/autotest_common.sh@10 -- # set +x 00:14:00.113 [2024-11-05 16:52:48.891446] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:00.113 [2024-11-05 16:52:48.891826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111200 ] 00:14:00.371 [2024-11-05 16:52:49.060281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:00.630 [2024-11-05 16:52:49.275371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.630 [2024-11-05 16:52:49.275382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.197 16:52:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.197 16:52:49 -- common/autotest_common.sh@862 -- # return 0 00:14:01.197 16:52:49 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:14:01.197 16:52:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.197 16:52:49 -- common/autotest_common.sh@10 -- # set +x 00:14:01.197 Malloc_QD 00:14:01.197 16:52:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.197 16:52:50 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:14:01.197 16:52:50 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:14:01.197 16:52:50 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:01.197 16:52:50 -- common/autotest_common.sh@899 -- # local i 00:14:01.197 16:52:50 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:01.198 16:52:50 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:01.198 16:52:50 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:01.198 16:52:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.198 16:52:50 -- common/autotest_common.sh@10 -- # set +x 00:14:01.198 16:52:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.198 16:52:50 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:14:01.198 16:52:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.198 16:52:50 -- common/autotest_common.sh@10 -- # set +x 00:14:01.198 [ 00:14:01.198 { 00:14:01.198 "name": 
"Malloc_QD", 00:14:01.198 "aliases": [ 00:14:01.198 "1c6b9a83-cdca-4a64-b8c0-2f224da1b849" 00:14:01.198 ], 00:14:01.198 "product_name": "Malloc disk", 00:14:01.198 "block_size": 512, 00:14:01.198 "num_blocks": 262144, 00:14:01.198 "uuid": "1c6b9a83-cdca-4a64-b8c0-2f224da1b849", 00:14:01.198 "assigned_rate_limits": { 00:14:01.198 "rw_ios_per_sec": 0, 00:14:01.198 "rw_mbytes_per_sec": 0, 00:14:01.198 "r_mbytes_per_sec": 0, 00:14:01.198 "w_mbytes_per_sec": 0 00:14:01.198 }, 00:14:01.198 "claimed": false, 00:14:01.198 "zoned": false, 00:14:01.198 "supported_io_types": { 00:14:01.198 "read": true, 00:14:01.198 "write": true, 00:14:01.198 "unmap": true, 00:14:01.198 "write_zeroes": true, 00:14:01.198 "flush": true, 00:14:01.198 "reset": true, 00:14:01.198 "compare": false, 00:14:01.198 "compare_and_write": false, 00:14:01.198 "abort": true, 00:14:01.198 "nvme_admin": false, 00:14:01.198 "nvme_io": false 00:14:01.198 }, 00:14:01.198 "memory_domains": [ 00:14:01.198 { 00:14:01.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.198 "dma_device_type": 2 00:14:01.198 } 00:14:01.198 ], 00:14:01.198 "driver_specific": {} 00:14:01.198 } 00:14:01.198 ] 00:14:01.198 16:52:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.198 16:52:50 -- common/autotest_common.sh@905 -- # return 0 00:14:01.198 16:52:50 -- bdev/blockdev.sh@548 -- # sleep 2 00:14:01.198 16:52:50 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:01.457 Running I/O for 5 seconds... 00:14:03.388 16:52:52 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:14:03.388 16:52:52 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:14:03.388 16:52:52 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:14:03.388 16:52:52 -- bdev/blockdev.sh@519 -- # local iostats 00:14:03.388 16:52:52 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:14:03.388 16:52:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.388 16:52:52 -- common/autotest_common.sh@10 -- # set +x 00:14:03.388 16:52:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.388 16:52:52 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:14:03.388 16:52:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.388 16:52:52 -- common/autotest_common.sh@10 -- # set +x 00:14:03.388 16:52:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.388 16:52:52 -- bdev/blockdev.sh@523 -- # iostats='{ 00:14:03.388 "tick_rate": 2200000000, 00:14:03.388 "ticks": 1669674281174, 00:14:03.388 "bdevs": [ 00:14:03.388 { 00:14:03.388 "name": "Malloc_QD", 00:14:03.388 "bytes_read": 924881408, 00:14:03.388 "num_read_ops": 225795, 00:14:03.388 "bytes_written": 0, 00:14:03.388 "num_write_ops": 0, 00:14:03.388 "bytes_unmapped": 0, 00:14:03.388 "num_unmap_ops": 0, 00:14:03.388 "bytes_copied": 0, 00:14:03.388 "num_copy_ops": 0, 00:14:03.388 "read_latency_ticks": 2142865168187, 00:14:03.388 "max_read_latency_ticks": 11943222, 00:14:03.388 "min_read_latency_ticks": 306801, 00:14:03.388 "write_latency_ticks": 0, 00:14:03.388 "max_write_latency_ticks": 0, 00:14:03.388 "min_write_latency_ticks": 0, 00:14:03.388 "unmap_latency_ticks": 0, 00:14:03.388 "max_unmap_latency_ticks": 0, 00:14:03.388 "min_unmap_latency_ticks": 0, 00:14:03.388 "copy_latency_ticks": 0, 00:14:03.388 "max_copy_latency_ticks": 0, 00:14:03.388 "min_copy_latency_ticks": 0, 00:14:03.388 "io_error": {}, 00:14:03.388 "queue_depth_polling_period": 10, 00:14:03.388 
"queue_depth": 512, 00:14:03.388 "io_time": 20, 00:14:03.388 "weighted_io_time": 10240 00:14:03.388 } 00:14:03.388 ] 00:14:03.388 }' 00:14:03.388 16:52:52 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:14:03.388 16:52:52 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:14:03.388 16:52:52 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:14:03.388 16:52:52 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:14:03.388 16:52:52 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:14:03.388 16:52:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.388 16:52:52 -- common/autotest_common.sh@10 -- # set +x 00:14:03.388 00:14:03.388 Latency(us) 00:14:03.388 [2024-11-05T16:52:52.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.388 [2024-11-05T16:52:52.265Z] Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:03.388 Malloc_QD : 1.98 58939.37 230.23 0.00 0.00 4333.36 1079.85 5868.45 00:14:03.388 [2024-11-05T16:52:52.265Z] Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:03.388 Malloc_QD : 1.99 59265.22 231.50 0.00 0.00 4309.21 744.73 5451.40 00:14:03.388 [2024-11-05T16:52:52.265Z] =================================================================================================================== 00:14:03.388 [2024-11-05T16:52:52.265Z] Total : 118204.60 461.74 0.00 0.00 4321.24 744.73 5868.45 00:14:03.388 0 00:14:03.388 16:52:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.388 16:52:52 -- bdev/blockdev.sh@552 -- # killprocess 111200 00:14:03.388 16:52:52 -- common/autotest_common.sh@936 -- # '[' -z 111200 ']' 00:14:03.388 16:52:52 -- common/autotest_common.sh@940 -- # kill -0 111200 00:14:03.388 16:52:52 -- common/autotest_common.sh@941 -- # uname 00:14:03.388 16:52:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:03.388 16:52:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111200 00:14:03.388 16:52:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:03.388 16:52:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:03.388 16:52:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111200' 00:14:03.388 killing process with pid 111200 00:14:03.388 16:52:52 -- common/autotest_common.sh@955 -- # kill 111200 00:14:03.388 Received shutdown signal, test time was about 2.118621 seconds 00:14:03.388 00:14:03.388 Latency(us) 00:14:03.388 [2024-11-05T16:52:52.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.388 [2024-11-05T16:52:52.265Z] =================================================================================================================== 00:14:03.388 [2024-11-05T16:52:52.265Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:03.388 16:52:52 -- common/autotest_common.sh@960 -- # wait 111200 00:14:04.767 16:52:53 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:14:04.767 00:14:04.767 real 0m4.630s 00:14:04.767 user 0m8.729s 00:14:04.767 sys 0m0.364s 00:14:04.767 16:52:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:04.767 ************************************ 00:14:04.767 END TEST bdev_qd_sampling 00:14:04.767 16:52:53 -- common/autotest_common.sh@10 -- # set +x 00:14:04.767 ************************************ 00:14:04.767 16:52:53 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:14:04.767 16:52:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 
']' 00:14:04.767 16:52:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:04.767 16:52:53 -- common/autotest_common.sh@10 -- # set +x 00:14:04.767 ************************************ 00:14:04.767 START TEST bdev_error 00:14:04.767 ************************************ 00:14:04.767 16:52:53 -- common/autotest_common.sh@1114 -- # error_test_suite '' 00:14:04.767 16:52:53 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:14:04.767 16:52:53 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:14:04.767 16:52:53 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:14:04.767 16:52:53 -- bdev/blockdev.sh@470 -- # ERR_PID=111296 00:14:04.767 Process error testing pid: 111296 00:14:04.767 16:52:53 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 111296' 00:14:04.767 16:52:53 -- bdev/blockdev.sh@472 -- # waitforlisten 111296 00:14:04.767 16:52:53 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:14:04.767 16:52:53 -- common/autotest_common.sh@829 -- # '[' -z 111296 ']' 00:14:04.767 16:52:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.767 16:52:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:04.767 16:52:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.767 16:52:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:04.767 16:52:53 -- common/autotest_common.sh@10 -- # set +x 00:14:04.767 [2024-11-05 16:52:53.594447] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:04.767 [2024-11-05 16:52:53.594646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111296 ] 00:14:05.026 [2024-11-05 16:52:53.763738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.284 [2024-11-05 16:52:53.941593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.854 16:52:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.854 16:52:54 -- common/autotest_common.sh@862 -- # return 0 00:14:05.854 16:52:54 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:05.854 16:52:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.854 16:52:54 -- common/autotest_common.sh@10 -- # set +x 00:14:05.854 Dev_1 00:14:05.854 16:52:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.854 16:52:54 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:14:05.854 16:52:54 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:14:05.854 16:52:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:05.854 16:52:54 -- common/autotest_common.sh@899 -- # local i 00:14:05.854 16:52:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:05.854 16:52:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:05.854 16:52:54 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:05.854 16:52:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.854 16:52:54 -- common/autotest_common.sh@10 -- # set +x 00:14:05.854 16:52:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.854 16:52:54 -- common/autotest_common.sh@904 -- # rpc_cmd 
bdev_get_bdevs -b Dev_1 -t 2000 00:14:05.854 16:52:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.854 16:52:54 -- common/autotest_common.sh@10 -- # set +x 00:14:05.854 [ 00:14:05.854 { 00:14:05.854 "name": "Dev_1", 00:14:05.854 "aliases": [ 00:14:05.854 "0e3a9b5a-f5dd-42e5-b7d3-1f4efa32f4fe" 00:14:05.854 ], 00:14:05.854 "product_name": "Malloc disk", 00:14:05.854 "block_size": 512, 00:14:05.854 "num_blocks": 262144, 00:14:05.854 "uuid": "0e3a9b5a-f5dd-42e5-b7d3-1f4efa32f4fe", 00:14:05.854 "assigned_rate_limits": { 00:14:05.854 "rw_ios_per_sec": 0, 00:14:05.854 "rw_mbytes_per_sec": 0, 00:14:05.854 "r_mbytes_per_sec": 0, 00:14:05.854 "w_mbytes_per_sec": 0 00:14:05.854 }, 00:14:05.854 "claimed": false, 00:14:05.854 "zoned": false, 00:14:05.854 "supported_io_types": { 00:14:05.854 "read": true, 00:14:05.854 "write": true, 00:14:05.854 "unmap": true, 00:14:05.854 "write_zeroes": true, 00:14:05.854 "flush": true, 00:14:05.854 "reset": true, 00:14:05.854 "compare": false, 00:14:05.854 "compare_and_write": false, 00:14:05.854 "abort": true, 00:14:05.854 "nvme_admin": false, 00:14:05.854 "nvme_io": false 00:14:05.854 }, 00:14:05.854 "memory_domains": [ 00:14:05.854 { 00:14:05.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.854 "dma_device_type": 2 00:14:05.854 } 00:14:05.854 ], 00:14:05.854 "driver_specific": {} 00:14:05.854 } 00:14:05.854 ] 00:14:05.854 16:52:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.854 16:52:54 -- common/autotest_common.sh@905 -- # return 0 00:14:05.854 16:52:54 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:14:05.854 16:52:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.854 16:52:54 -- common/autotest_common.sh@10 -- # set +x 00:14:05.854 true 00:14:05.854 16:52:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.854 16:52:54 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:05.854 16:52:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.854 16:52:54 -- common/autotest_common.sh@10 -- # set +x 00:14:06.114 Dev_2 00:14:06.114 16:52:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.114 16:52:54 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:14:06.114 16:52:54 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:14:06.114 16:52:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:06.114 16:52:54 -- common/autotest_common.sh@899 -- # local i 00:14:06.114 16:52:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:06.114 16:52:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:06.114 16:52:54 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:06.114 16:52:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.114 16:52:54 -- common/autotest_common.sh@10 -- # set +x 00:14:06.114 16:52:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.114 16:52:54 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:06.114 16:52:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.114 16:52:54 -- common/autotest_common.sh@10 -- # set +x 00:14:06.114 [ 00:14:06.114 { 00:14:06.114 "name": "Dev_2", 00:14:06.114 "aliases": [ 00:14:06.114 "c06b9114-9ce7-4e07-9f32-131b996a1b92" 00:14:06.114 ], 00:14:06.114 "product_name": "Malloc disk", 00:14:06.114 "block_size": 512, 00:14:06.114 "num_blocks": 262144, 00:14:06.114 "uuid": "c06b9114-9ce7-4e07-9f32-131b996a1b92", 00:14:06.114 "assigned_rate_limits": { 00:14:06.114 "rw_ios_per_sec": 
0, 00:14:06.114 "rw_mbytes_per_sec": 0, 00:14:06.114 "r_mbytes_per_sec": 0, 00:14:06.114 "w_mbytes_per_sec": 0 00:14:06.114 }, 00:14:06.114 "claimed": false, 00:14:06.114 "zoned": false, 00:14:06.114 "supported_io_types": { 00:14:06.114 "read": true, 00:14:06.114 "write": true, 00:14:06.114 "unmap": true, 00:14:06.114 "write_zeroes": true, 00:14:06.114 "flush": true, 00:14:06.114 "reset": true, 00:14:06.114 "compare": false, 00:14:06.114 "compare_and_write": false, 00:14:06.114 "abort": true, 00:14:06.114 "nvme_admin": false, 00:14:06.114 "nvme_io": false 00:14:06.114 }, 00:14:06.114 "memory_domains": [ 00:14:06.114 { 00:14:06.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.114 "dma_device_type": 2 00:14:06.114 } 00:14:06.114 ], 00:14:06.114 "driver_specific": {} 00:14:06.114 } 00:14:06.114 ] 00:14:06.114 16:52:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.114 16:52:54 -- common/autotest_common.sh@905 -- # return 0 00:14:06.114 16:52:54 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:06.114 16:52:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.114 16:52:54 -- common/autotest_common.sh@10 -- # set +x 00:14:06.114 16:52:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.114 16:52:54 -- bdev/blockdev.sh@482 -- # sleep 1 00:14:06.114 16:52:54 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:06.114 Running I/O for 5 seconds... 00:14:07.050 Process is existed as continue on error is set. Pid: 111296 00:14:07.050 16:52:55 -- bdev/blockdev.sh@485 -- # kill -0 111296 00:14:07.050 16:52:55 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 111296' 00:14:07.050 16:52:55 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:14:07.050 16:52:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.050 16:52:55 -- common/autotest_common.sh@10 -- # set +x 00:14:07.050 16:52:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.050 16:52:55 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:14:07.050 16:52:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.050 16:52:55 -- common/autotest_common.sh@10 -- # set +x 00:14:07.309 Timeout while waiting for response: 00:14:07.309 00:14:07.309 00:14:07.309 16:52:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.309 16:52:56 -- bdev/blockdev.sh@495 -- # sleep 5 00:14:11.499 00:14:11.499 Latency(us) 00:14:11.499 [2024-11-05T16:53:00.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.499 [2024-11-05T16:53:00.376Z] Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:11.499 EE_Dev_1 : 0.90 43490.68 169.89 5.58 0.00 365.19 133.12 655.36 00:14:11.499 [2024-11-05T16:53:00.376Z] Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:11.499 Dev_2 : 5.00 89808.84 350.82 0.00 0.00 175.55 62.37 280255.77 00:14:11.499 [2024-11-05T16:53:00.376Z] =================================================================================================================== 00:14:11.499 [2024-11-05T16:53:00.376Z] Total : 133299.52 520.70 5.58 0.00 190.69 62.37 280255.77 00:14:12.437 16:53:01 -- bdev/blockdev.sh@497 -- # killprocess 111296 00:14:12.437 16:53:01 -- common/autotest_common.sh@936 -- # '[' -z 111296 ']' 00:14:12.437 16:53:01 -- common/autotest_common.sh@940 -- # kill -0 111296 00:14:12.437 16:53:01 -- 
common/autotest_common.sh@941 -- # uname 00:14:12.437 16:53:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:12.437 16:53:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111296 00:14:12.437 killing process with pid 111296 00:14:12.437 Received shutdown signal, test time was about 5.000000 seconds 00:14:12.437 00:14:12.437 Latency(us) 00:14:12.437 [2024-11-05T16:53:01.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.437 [2024-11-05T16:53:01.314Z] =================================================================================================================== 00:14:12.437 [2024-11-05T16:53:01.314Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:12.437 16:53:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:12.437 16:53:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:12.437 16:53:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111296' 00:14:12.437 16:53:01 -- common/autotest_common.sh@955 -- # kill 111296 00:14:12.437 16:53:01 -- common/autotest_common.sh@960 -- # wait 111296 00:14:13.814 Process error testing pid: 111416 00:14:13.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.814 16:53:02 -- bdev/blockdev.sh@501 -- # ERR_PID=111416 00:14:13.814 16:53:02 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:14:13.814 16:53:02 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 111416' 00:14:13.814 16:53:02 -- bdev/blockdev.sh@503 -- # waitforlisten 111416 00:14:13.814 16:53:02 -- common/autotest_common.sh@829 -- # '[' -z 111416 ']' 00:14:13.814 16:53:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.814 16:53:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.814 16:53:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.814 16:53:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.814 16:53:02 -- common/autotest_common.sh@10 -- # set +x 00:14:13.814 [2024-11-05 16:53:02.491415] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
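This second error run (pid 111416) rebuilds the same two-device stack as the pid-111296 continue-on-error run above: a malloc base bdev wrapped by an error bdev, plus an unwrapped control device. Sketched with the stock rpc.py, using the names from this trace (the EE_ prefix is how the error bdev derives its name from the base it wraps):

    SPDK=/home/vagrant/spdk_repo/spdk

    "$SPDK/scripts/rpc.py" bdev_malloc_create -b Dev_1 128 512   # base: 128 MiB, 512 B blocks
    "$SPDK/scripts/rpc.py" bdev_error_create Dev_1               # exposes it as EE_Dev_1
    "$SPDK/scripts/rpc.py" bdev_malloc_create -b Dev_2 128 512   # error-free control bdev

    # Fail the next 5 I/Os of every type on the wrapped device.
    "$SPDK/scripts/rpc.py" bdev_error_inject_error EE_Dev_1 all failure -n 5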
00:14:13.814 [2024-11-05 16:53:02.492598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111416 ] 00:14:13.814 [2024-11-05 16:53:02.655547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.073 [2024-11-05 16:53:02.823839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.641 16:53:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.641 16:53:03 -- common/autotest_common.sh@862 -- # return 0 00:14:14.641 16:53:03 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:14.641 16:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.641 16:53:03 -- common/autotest_common.sh@10 -- # set +x 00:14:14.900 Dev_1 00:14:14.900 16:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.900 16:53:03 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:14:14.900 16:53:03 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:14:14.900 16:53:03 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:14.900 16:53:03 -- common/autotest_common.sh@899 -- # local i 00:14:14.900 16:53:03 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:14.900 16:53:03 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:14.900 16:53:03 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:14.900 16:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.900 16:53:03 -- common/autotest_common.sh@10 -- # set +x 00:14:14.900 16:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.900 16:53:03 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:14.900 16:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.900 16:53:03 -- common/autotest_common.sh@10 -- # set +x 00:14:14.900 [ 00:14:14.900 { 00:14:14.900 "name": "Dev_1", 00:14:14.900 "aliases": [ 00:14:14.900 "84f91ca4-2937-4f95-9814-6e9ee05dca21" 00:14:14.900 ], 00:14:14.900 "product_name": "Malloc disk", 00:14:14.900 "block_size": 512, 00:14:14.900 "num_blocks": 262144, 00:14:14.900 "uuid": "84f91ca4-2937-4f95-9814-6e9ee05dca21", 00:14:14.900 "assigned_rate_limits": { 00:14:14.900 "rw_ios_per_sec": 0, 00:14:14.900 "rw_mbytes_per_sec": 0, 00:14:14.900 "r_mbytes_per_sec": 0, 00:14:14.900 "w_mbytes_per_sec": 0 00:14:14.900 }, 00:14:14.900 "claimed": false, 00:14:14.900 "zoned": false, 00:14:14.900 "supported_io_types": { 00:14:14.900 "read": true, 00:14:14.901 "write": true, 00:14:14.901 "unmap": true, 00:14:14.901 "write_zeroes": true, 00:14:14.901 "flush": true, 00:14:14.901 "reset": true, 00:14:14.901 "compare": false, 00:14:14.901 "compare_and_write": false, 00:14:14.901 "abort": true, 00:14:14.901 "nvme_admin": false, 00:14:14.901 "nvme_io": false 00:14:14.901 }, 00:14:14.901 "memory_domains": [ 00:14:14.901 { 00:14:14.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.901 "dma_device_type": 2 00:14:14.901 } 00:14:14.901 ], 00:14:14.901 "driver_specific": {} 00:14:14.901 } 00:14:14.901 ] 00:14:14.901 16:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.901 16:53:03 -- common/autotest_common.sh@905 -- # return 0 00:14:14.901 16:53:03 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:14:14.901 16:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.901 16:53:03 -- common/autotest_common.sh@10 -- # set +x 00:14:14.901 true 
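With Dev_1 wrapped (the bare "true" above is bdev_error_create's acknowledgement), the test arms five injected failures and, because this bdevperf instance was started without the continue-on-error flag, asserts that perform_tests fails outright. The NOT idiom it uses for that can be sketched roughly as follows — the real helper in autotest_common.sh also folds exit codes above 128 down (the es=255 -> 127 -> 1 dance visible further below) before inverting:

    NOT() {                      # succeed only when the wrapped command fails
        if "$@"; then return 1; else return 0; fi
    }

    NOT wait "$ERR_PID"          # bdevperf must have exited non-zero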
00:14:14.901 16:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.901 16:53:03 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:14.901 16:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.901 16:53:03 -- common/autotest_common.sh@10 -- # set +x 00:14:14.901 Dev_2 00:14:14.901 16:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.901 16:53:03 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:14:14.901 16:53:03 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:14:14.901 16:53:03 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:14.901 16:53:03 -- common/autotest_common.sh@899 -- # local i 00:14:14.901 16:53:03 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:14.901 16:53:03 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:14.901 16:53:03 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:14.901 16:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.901 16:53:03 -- common/autotest_common.sh@10 -- # set +x 00:14:14.901 16:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.901 16:53:03 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:14.901 16:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.901 16:53:03 -- common/autotest_common.sh@10 -- # set +x 00:14:14.901 [ 00:14:14.901 { 00:14:14.901 "name": "Dev_2", 00:14:14.901 "aliases": [ 00:14:14.901 "0feaa3a2-f127-4e4a-a9b5-3e8eaa4a266a" 00:14:14.901 ], 00:14:14.901 "product_name": "Malloc disk", 00:14:14.901 "block_size": 512, 00:14:14.901 "num_blocks": 262144, 00:14:14.901 "uuid": "0feaa3a2-f127-4e4a-a9b5-3e8eaa4a266a", 00:14:14.901 "assigned_rate_limits": { 00:14:14.901 "rw_ios_per_sec": 0, 00:14:14.901 "rw_mbytes_per_sec": 0, 00:14:14.901 "r_mbytes_per_sec": 0, 00:14:14.901 "w_mbytes_per_sec": 0 00:14:14.901 }, 00:14:14.901 "claimed": false, 00:14:14.901 "zoned": false, 00:14:14.901 "supported_io_types": { 00:14:14.901 "read": true, 00:14:14.901 "write": true, 00:14:14.901 "unmap": true, 00:14:14.901 "write_zeroes": true, 00:14:14.901 "flush": true, 00:14:14.901 "reset": true, 00:14:14.901 "compare": false, 00:14:14.901 "compare_and_write": false, 00:14:14.901 "abort": true, 00:14:14.901 "nvme_admin": false, 00:14:14.901 "nvme_io": false 00:14:14.901 }, 00:14:14.901 "memory_domains": [ 00:14:14.901 { 00:14:14.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.901 "dma_device_type": 2 00:14:14.901 } 00:14:14.901 ], 00:14:14.901 "driver_specific": {} 00:14:14.901 } 00:14:14.901 ] 00:14:14.901 16:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.901 16:53:03 -- common/autotest_common.sh@905 -- # return 0 00:14:14.901 16:53:03 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:14.901 16:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.901 16:53:03 -- common/autotest_common.sh@10 -- # set +x 00:14:14.901 16:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.901 16:53:03 -- bdev/blockdev.sh@513 -- # NOT wait 111416 00:14:14.901 16:53:03 -- common/autotest_common.sh@650 -- # local es=0 00:14:14.901 16:53:03 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:14.901 16:53:03 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 111416 00:14:14.901 16:53:03 -- common/autotest_common.sh@638 -- # local arg=wait 00:14:14.901 16:53:03 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.901 16:53:03 -- common/autotest_common.sh@642 -- # type -t wait 00:14:14.901 16:53:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.901 16:53:03 -- common/autotest_common.sh@653 -- # wait 111416 00:14:15.160 Running I/O for 5 seconds... 00:14:15.160 task offset: 134880 on job bdev=EE_Dev_1 fails 00:14:15.160 00:14:15.160 Latency(us) 00:14:15.160 [2024-11-05T16:53:04.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.160 [2024-11-05T16:53:04.037Z] Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:15.160 [2024-11-05T16:53:04.037Z] Job: EE_Dev_1 ended in about 0.00 seconds with error 00:14:15.160 EE_Dev_1 : 0.00 28460.54 111.17 6468.31 0.00 378.21 151.74 688.87 00:14:15.160 [2024-11-05T16:53:04.037Z] Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:15.160 Dev_2 : 0.00 21024.97 82.13 0.00 0.00 509.88 141.50 908.57 00:14:15.160 [2024-11-05T16:53:04.037Z] =================================================================================================================== 00:14:15.160 [2024-11-05T16:53:04.037Z] Total : 49485.51 193.30 6468.31 0.00 449.62 141.50 908.57 00:14:15.160 [2024-11-05 16:53:03.854449] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:15.160 request: 00:14:15.160 { 00:14:15.160 "method": "perform_tests", 00:14:15.160 "req_id": 1 00:14:15.160 } 00:14:15.160 Got JSON-RPC error response 00:14:15.160 response: 00:14:15.160 { 00:14:15.160 "code": -32603, 00:14:15.160 "message": "bdevperf failed with error Operation not permitted" 00:14:15.160 } 00:14:16.537 16:53:05 -- common/autotest_common.sh@653 -- # es=255 00:14:16.537 16:53:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:16.537 16:53:05 -- common/autotest_common.sh@662 -- # es=127 00:14:16.537 16:53:05 -- common/autotest_common.sh@663 -- # case "$es" in 00:14:16.537 16:53:05 -- common/autotest_common.sh@670 -- # es=1 00:14:16.537 16:53:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:16.537 00:14:16.537 real 0m11.859s 00:14:16.537 user 0m12.018s 00:14:16.537 sys 0m0.859s 00:14:16.537 16:53:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:16.537 ************************************ 00:14:16.537 END TEST bdev_error 00:14:16.537 ************************************ 00:14:16.537 16:53:05 -- common/autotest_common.sh@10 -- # set +x 00:14:16.796 16:53:05 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:14:16.796 16:53:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:16.796 16:53:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:16.796 16:53:05 -- common/autotest_common.sh@10 -- # set +x 00:14:16.796 ************************************ 00:14:16.796 START TEST bdev_stat 00:14:16.796 ************************************ 00:14:16.796 16:53:05 -- common/autotest_common.sh@1114 -- # stat_test_suite '' 00:14:16.796 16:53:05 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:14:16.796 16:53:05 -- bdev/blockdev.sh@594 -- # STAT_PID=111476 00:14:16.796 16:53:05 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:14:16.796 Process Bdev IO statistics testing pid: 111476 00:14:16.796 16:53:05 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 111476' 00:14:16.796 16:53:05 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess 
$STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:14:16.796 16:53:05 -- bdev/blockdev.sh@597 -- # waitforlisten 111476 00:14:16.796 16:53:05 -- common/autotest_common.sh@829 -- # '[' -z 111476 ']' 00:14:16.796 16:53:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.796 16:53:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.796 16:53:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.796 16:53:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.796 16:53:05 -- common/autotest_common.sh@10 -- # set +x 00:14:16.796 [2024-11-05 16:53:05.504318] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:16.797 [2024-11-05 16:53:05.504642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111476 ] 00:14:16.797 [2024-11-05 16:53:05.671842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:17.055 [2024-11-05 16:53:05.908717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.055 [2024-11-05 16:53:05.908742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.673 16:53:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.673 16:53:06 -- common/autotest_common.sh@862 -- # return 0 00:14:17.673 16:53:06 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:14:17.673 16:53:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.674 16:53:06 -- common/autotest_common.sh@10 -- # set +x 00:14:17.933 Malloc_STAT 00:14:17.933 16:53:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.933 16:53:06 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:14:17.933 16:53:06 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:14:17.933 16:53:06 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:17.933 16:53:06 -- common/autotest_common.sh@899 -- # local i 00:14:17.933 16:53:06 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:17.933 16:53:06 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:17.933 16:53:06 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:17.933 16:53:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.933 16:53:06 -- common/autotest_common.sh@10 -- # set +x 00:14:17.933 16:53:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.933 16:53:06 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:14:17.933 16:53:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.933 16:53:06 -- common/autotest_common.sh@10 -- # set +x 00:14:17.933 [ 00:14:17.933 { 00:14:17.933 "name": "Malloc_STAT", 00:14:17.933 "aliases": [ 00:14:17.933 "780324cc-d4db-4637-9865-14e2094264a2" 00:14:17.933 ], 00:14:17.933 "product_name": "Malloc disk", 00:14:17.933 "block_size": 512, 00:14:17.933 "num_blocks": 262144, 00:14:17.933 "uuid": "780324cc-d4db-4637-9865-14e2094264a2", 00:14:17.933 "assigned_rate_limits": { 00:14:17.933 "rw_ios_per_sec": 0, 00:14:17.933 "rw_mbytes_per_sec": 0, 00:14:17.933 "r_mbytes_per_sec": 0, 00:14:17.933 "w_mbytes_per_sec": 0 00:14:17.933 }, 00:14:17.933 "claimed": false, 00:14:17.933 
"zoned": false, 00:14:17.933 "supported_io_types": { 00:14:17.933 "read": true, 00:14:17.933 "write": true, 00:14:17.933 "unmap": true, 00:14:17.933 "write_zeroes": true, 00:14:17.933 "flush": true, 00:14:17.933 "reset": true, 00:14:17.933 "compare": false, 00:14:17.933 "compare_and_write": false, 00:14:17.933 "abort": true, 00:14:17.933 "nvme_admin": false, 00:14:17.933 "nvme_io": false 00:14:17.933 }, 00:14:17.933 "memory_domains": [ 00:14:17.933 { 00:14:17.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.933 "dma_device_type": 2 00:14:17.933 } 00:14:17.933 ], 00:14:17.933 "driver_specific": {} 00:14:17.933 } 00:14:17.933 ] 00:14:17.933 16:53:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.933 16:53:06 -- common/autotest_common.sh@905 -- # return 0 00:14:17.933 16:53:06 -- bdev/blockdev.sh@603 -- # sleep 2 00:14:17.933 16:53:06 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:17.933 Running I/O for 10 seconds... 00:14:19.838 16:53:08 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:14:19.838 16:53:08 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:14:19.838 16:53:08 -- bdev/blockdev.sh@558 -- # local iostats 00:14:19.838 16:53:08 -- bdev/blockdev.sh@559 -- # local io_count1 00:14:19.838 16:53:08 -- bdev/blockdev.sh@560 -- # local io_count2 00:14:19.838 16:53:08 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:14:19.838 16:53:08 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:14:19.838 16:53:08 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:14:19.839 16:53:08 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:14:19.839 16:53:08 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:19.839 16:53:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.839 16:53:08 -- common/autotest_common.sh@10 -- # set +x 00:14:19.839 16:53:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.839 16:53:08 -- bdev/blockdev.sh@566 -- # iostats='{ 00:14:19.839 "tick_rate": 2200000000, 00:14:19.839 "ticks": 1706138131366, 00:14:19.839 "bdevs": [ 00:14:19.839 { 00:14:19.839 "name": "Malloc_STAT", 00:14:19.839 "bytes_read": 944804352, 00:14:19.839 "num_read_ops": 230659, 00:14:19.839 "bytes_written": 0, 00:14:19.839 "num_write_ops": 0, 00:14:19.839 "bytes_unmapped": 0, 00:14:19.839 "num_unmap_ops": 0, 00:14:19.839 "bytes_copied": 0, 00:14:19.839 "num_copy_ops": 0, 00:14:19.839 "read_latency_ticks": 2138509397440, 00:14:19.839 "max_read_latency_ticks": 13570492, 00:14:19.839 "min_read_latency_ticks": 280944, 00:14:19.839 "write_latency_ticks": 0, 00:14:19.839 "max_write_latency_ticks": 0, 00:14:19.839 "min_write_latency_ticks": 0, 00:14:19.839 "unmap_latency_ticks": 0, 00:14:19.839 "max_unmap_latency_ticks": 0, 00:14:19.839 "min_unmap_latency_ticks": 0, 00:14:19.839 "copy_latency_ticks": 0, 00:14:19.839 "max_copy_latency_ticks": 0, 00:14:19.839 "min_copy_latency_ticks": 0, 00:14:19.839 "io_error": {} 00:14:19.839 } 00:14:19.839 ] 00:14:19.839 }' 00:14:19.839 16:53:08 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:14:19.839 16:53:08 -- bdev/blockdev.sh@567 -- # io_count1=230659 00:14:19.839 16:53:08 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:14:19.839 16:53:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.839 16:53:08 -- common/autotest_common.sh@10 -- # set +x 00:14:19.839 16:53:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.839 16:53:08 -- 
bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:14:19.839 "tick_rate": 2200000000, 00:14:19.839 "ticks": 1706288504180, 00:14:19.839 "name": "Malloc_STAT", 00:14:19.839 "channels": [ 00:14:19.839 { 00:14:19.839 "thread_id": 2, 00:14:19.839 "bytes_read": 486539264, 00:14:19.839 "num_read_ops": 118784, 00:14:19.839 "bytes_written": 0, 00:14:19.839 "num_write_ops": 0, 00:14:19.839 "bytes_unmapped": 0, 00:14:19.839 "num_unmap_ops": 0, 00:14:19.839 "bytes_copied": 0, 00:14:19.839 "num_copy_ops": 0, 00:14:19.839 "read_latency_ticks": 1107614532107, 00:14:19.839 "max_read_latency_ticks": 13570492, 00:14:19.839 "min_read_latency_ticks": 7630502, 00:14:19.839 "write_latency_ticks": 0, 00:14:19.839 "max_write_latency_ticks": 0, 00:14:19.839 "min_write_latency_ticks": 0, 00:14:19.839 "unmap_latency_ticks": 0, 00:14:19.839 "max_unmap_latency_ticks": 0, 00:14:19.839 "min_unmap_latency_ticks": 0, 00:14:19.839 "copy_latency_ticks": 0, 00:14:19.839 "max_copy_latency_ticks": 0, 00:14:19.839 "min_copy_latency_ticks": 0 00:14:19.839 }, 00:14:19.839 { 00:14:19.839 "thread_id": 3, 00:14:19.839 "bytes_read": 490733568, 00:14:19.839 "num_read_ops": 119808, 00:14:19.839 "bytes_written": 0, 00:14:19.839 "num_write_ops": 0, 00:14:19.839 "bytes_unmapped": 0, 00:14:19.839 "num_unmap_ops": 0, 00:14:19.839 "bytes_copied": 0, 00:14:19.839 "num_copy_ops": 0, 00:14:19.839 "read_latency_ticks": 1109461287141, 00:14:19.839 "max_read_latency_ticks": 12510999, 00:14:19.839 "min_read_latency_ticks": 6971312, 00:14:19.839 "write_latency_ticks": 0, 00:14:19.839 "max_write_latency_ticks": 0, 00:14:19.839 "min_write_latency_ticks": 0, 00:14:19.839 "unmap_latency_ticks": 0, 00:14:19.839 "max_unmap_latency_ticks": 0, 00:14:19.839 "min_unmap_latency_ticks": 0, 00:14:19.839 "copy_latency_ticks": 0, 00:14:19.839 "max_copy_latency_ticks": 0, 00:14:19.839 "min_copy_latency_ticks": 0 00:14:19.839 } 00:14:19.839 ] 00:14:19.839 }' 00:14:19.839 16:53:08 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:14:20.098 16:53:08 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=118784 00:14:20.098 16:53:08 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=118784 00:14:20.098 16:53:08 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:14:20.098 16:53:08 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=119808 00:14:20.098 16:53:08 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=238592 00:14:20.098 16:53:08 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:20.098 16:53:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.098 16:53:08 -- common/autotest_common.sh@10 -- # set +x 00:14:20.098 16:53:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.098 16:53:08 -- bdev/blockdev.sh@575 -- # iostats='{ 00:14:20.098 "tick_rate": 2200000000, 00:14:20.098 "ticks": 1706559544882, 00:14:20.098 "bdevs": [ 00:14:20.098 { 00:14:20.098 "name": "Malloc_STAT", 00:14:20.098 "bytes_read": 1033933312, 00:14:20.098 "num_read_ops": 252419, 00:14:20.098 "bytes_written": 0, 00:14:20.098 "num_write_ops": 0, 00:14:20.098 "bytes_unmapped": 0, 00:14:20.098 "num_unmap_ops": 0, 00:14:20.098 "bytes_copied": 0, 00:14:20.098 "num_copy_ops": 0, 00:14:20.098 "read_latency_ticks": 2353982396379, 00:14:20.098 "max_read_latency_ticks": 13570492, 00:14:20.098 "min_read_latency_ticks": 280944, 00:14:20.098 "write_latency_ticks": 0, 00:14:20.098 "max_write_latency_ticks": 0, 00:14:20.098 "min_write_latency_ticks": 0, 00:14:20.098 "unmap_latency_ticks": 0, 00:14:20.098 "max_unmap_latency_ticks": 0, 
00:14:20.098 16:53:08 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT
00:14:20.098 16:53:08 -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:20.098 16:53:08 -- common/autotest_common.sh@10 -- # set +x
00:14:20.098
00:14:20.098 Latency(us)
00:14:20.098 [2024-11-05T16:53:08.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:20.098 [2024-11-05T16:53:08.975Z] Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:14:20.098 Malloc_STAT : 2.17 59909.99 234.02 0.00 0.00 4263.56 1042.62 6196.13
00:14:20.098 [2024-11-05T16:53:08.975Z] Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:14:20.098 Malloc_STAT : 2.17 60442.37 236.10 0.00 0.00 4226.32 811.75 5689.72
00:14:20.098 [2024-11-05T16:53:08.975Z] ===================================================================================================================
00:14:20.098 [2024-11-05T16:53:08.975Z] Total : 120352.36 470.13 0.00 0.00 4244.85 811.75 6196.13
00:14:20.357 0
00:14:20.357 16:53:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:20.357 16:53:09 -- bdev/blockdev.sh@607 -- # killprocess 111476
00:14:20.357 16:53:09 -- common/autotest_common.sh@936 -- # '[' -z 111476 ']'
00:14:20.357 16:53:09 -- common/autotest_common.sh@940 -- # kill -0 111476
00:14:20.357 16:53:09 -- common/autotest_common.sh@941 -- # uname
00:14:20.357 16:53:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:20.357 16:53:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111476
00:14:20.357 16:53:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:20.357 killing process with pid 111476
16:53:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:20.357 16:53:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111476'
Received shutdown signal, test time was about 2.302259 seconds
00:14:20.357
00:14:20.357 Latency(us)
[2024-11-05T16:53:09.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-05T16:53:09.234Z] ===================================================================================================================
[2024-11-05T16:53:09.234Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
16:53:09 -- common/autotest_common.sh@955 -- # kill 111476
16:53:09 -- common/autotest_common.sh@960 -- # wait 111476
00:14:21.747 ************************************
00:14:21.747 END TEST bdev_stat
00:14:21.747 ************************************
16:53:10 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT
00:14:21.747
00:14:21.747 real 0m4.769s
00:14:21.747 user 0m9.097s
00:14:21.747 sys 0m0.380s
00:14:21.748 16:53:10 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:21.748 16:53:10 -- common/autotest_common.sh@10 -- # set +x
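The averages in the table above can be reproduced from the raw iostat counters earlier in the trace: average latency = read_latency_ticks / num_read_ops / tick_rate, converted to microseconds. A quick check with the thread_id 2 values copied from the log (illustrative only, not part of the test):

    tick_rate=2200000000               # ticks per second, from the iostat dump
    read_latency_ticks=1107614532107   # thread_id 2 (the Core Mask 0x1 job)
    num_read_ops=118784
    awk -v t="$read_latency_ticks" -v n="$num_read_ops" -v r="$tick_rate" \
        'BEGIN { printf "%.2f us\n", t / n / r * 1e6 }'
    # prints ~4238 us, in line with the 4263.56 us average reported above; the
    # small gap is because the counters were sampled slightly before the table.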
00:14:21.748 16:53:10 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]]
00:14:21.748 16:53:10 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]]
00:14:21.748 16:53:10 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT
00:14:21.748 16:53:10 -- bdev/blockdev.sh@809 -- # cleanup
00:14:21.748 16:53:10 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:14:21.748 16:53:10 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:14:21.748 16:53:10 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]]
00:14:21.748 16:53:10 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]]
00:14:21.748 16:53:10 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]]
00:14:21.748 16:53:10 -- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]]
00:14:21.748 ************************************
00:14:21.748 END TEST blockdev_general
00:14:21.748 ************************************
00:14:21.748
00:14:21.748 real 2m18.722s
00:14:21.748 user 5m45.594s
00:14:21.748 sys 0m20.551s
00:14:21.748 16:53:10 -- spdk/autotest.sh@183 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh
00:14:21.748 16:53:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:14:21.748 16:53:10 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:21.748 16:53:10 -- common/autotest_common.sh@10 -- # set +x
00:14:21.748 ************************************
00:14:21.748 START TEST bdev_raid
00:14:21.748 ************************************
00:14:21.748 16:53:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh
00:14:21.748 * Looking for test storage...
00:14:21.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:14:21.748 16:53:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:14:21.748 16:53:10 -- common/autotest_common.sh@1690 -- # lcov --version
00:14:21.748 16:53:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:14:21.748 16:53:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:14:21.748 16:53:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:14:21.748 16:53:10 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:14:21.748 16:53:10 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:14:21.748 16:53:10 -- scripts/common.sh@335 -- # IFS=.-:
00:14:21.748 16:53:10 -- scripts/common.sh@335 -- # read -ra ver1
00:14:21.748 16:53:10 -- scripts/common.sh@336 -- # IFS=.-:
00:14:21.748 16:53:10 -- scripts/common.sh@336 -- # read -ra ver2
00:14:21.748 16:53:10 -- scripts/common.sh@337 -- # local 'op=<'
00:14:21.748 16:53:10 -- scripts/common.sh@339 -- # ver1_l=2
00:14:21.748 16:53:10 -- scripts/common.sh@340 -- # ver2_l=1
00:14:21.748 16:53:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:14:21.748 16:53:10 -- scripts/common.sh@343 -- # case "$op" in
00:14:21.748 16:53:10 -- scripts/common.sh@344 -- # : 1
00:14:21.748 16:53:10 -- scripts/common.sh@363 -- # (( v = 0 ))
00:14:21.748 16:53:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:14:21.748 16:53:10 -- scripts/common.sh@364 -- # decimal 1 00:14:21.748 16:53:10 -- scripts/common.sh@352 -- # local d=1 00:14:21.748 16:53:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:21.748 16:53:10 -- scripts/common.sh@354 -- # echo 1 00:14:21.748 16:53:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:21.748 16:53:10 -- scripts/common.sh@365 -- # decimal 2 00:14:21.748 16:53:10 -- scripts/common.sh@352 -- # local d=2 00:14:21.748 16:53:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:21.748 16:53:10 -- scripts/common.sh@354 -- # echo 2 00:14:21.748 16:53:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:21.748 16:53:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:21.748 16:53:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:21.748 16:53:10 -- scripts/common.sh@367 -- # return 0 00:14:21.748 16:53:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:21.748 16:53:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:21.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.748 --rc genhtml_branch_coverage=1 00:14:21.748 --rc genhtml_function_coverage=1 00:14:21.748 --rc genhtml_legend=1 00:14:21.748 --rc geninfo_all_blocks=1 00:14:21.748 --rc geninfo_unexecuted_blocks=1 00:14:21.748 00:14:21.748 ' 00:14:21.748 16:53:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:21.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.748 --rc genhtml_branch_coverage=1 00:14:21.748 --rc genhtml_function_coverage=1 00:14:21.748 --rc genhtml_legend=1 00:14:21.748 --rc geninfo_all_blocks=1 00:14:21.748 --rc geninfo_unexecuted_blocks=1 00:14:21.748 00:14:21.748 ' 00:14:21.748 16:53:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:21.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.748 --rc genhtml_branch_coverage=1 00:14:21.748 --rc genhtml_function_coverage=1 00:14:21.748 --rc genhtml_legend=1 00:14:21.748 --rc geninfo_all_blocks=1 00:14:21.748 --rc geninfo_unexecuted_blocks=1 00:14:21.748 00:14:21.748 ' 00:14:21.748 16:53:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:21.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.748 --rc genhtml_branch_coverage=1 00:14:21.748 --rc genhtml_function_coverage=1 00:14:21.748 --rc genhtml_legend=1 00:14:21.748 --rc geninfo_all_blocks=1 00:14:21.748 --rc geninfo_unexecuted_blocks=1 00:14:21.748 00:14:21.748 ' 00:14:21.748 16:53:10 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:21.748 16:53:10 -- bdev/nbd_common.sh@6 -- # set -e 00:14:21.748 16:53:10 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:14:21.748 16:53:10 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:14:21.748 16:53:10 -- bdev/bdev_raid.sh@716 -- # uname -s 00:14:21.748 16:53:10 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:14:21.748 16:53:10 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:14:21.748 16:53:10 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:14:21.748 16:53:10 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:14:21.748 16:53:10 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:14:21.748 16:53:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:21.748 16:53:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:21.748 16:53:10 -- 
common/autotest_common.sh@10 -- # set +x 00:14:21.748 ************************************ 00:14:21.748 START TEST raid_function_test_raid0 00:14:21.748 ************************************ 00:14:21.748 16:53:10 -- common/autotest_common.sh@1114 -- # raid_function_test raid0 00:14:21.748 16:53:10 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:14:21.748 16:53:10 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:14:21.748 16:53:10 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:14:21.748 16:53:10 -- bdev/bdev_raid.sh@86 -- # raid_pid=111642 00:14:21.748 Process raid pid: 111642 00:14:21.748 16:53:10 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 111642' 00:14:21.748 16:53:10 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:21.748 16:53:10 -- bdev/bdev_raid.sh@88 -- # waitforlisten 111642 /var/tmp/spdk-raid.sock 00:14:21.748 16:53:10 -- common/autotest_common.sh@829 -- # '[' -z 111642 ']' 00:14:21.748 16:53:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:21.748 16:53:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:21.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:21.748 16:53:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:21.748 16:53:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:21.748 16:53:10 -- common/autotest_common.sh@10 -- # set +x 00:14:21.748 [2024-11-05 16:53:10.592986] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:21.748 [2024-11-05 16:53:10.593196] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.007 [2024-11-05 16:53:10.747349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.266 [2024-11-05 16:53:10.923066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.266 [2024-11-05 16:53:11.101119] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.835 16:53:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:22.835 16:53:11 -- common/autotest_common.sh@862 -- # return 0 00:14:22.835 16:53:11 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:14:22.835 16:53:11 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:14:22.835 16:53:11 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:22.835 16:53:11 -- bdev/bdev_raid.sh@70 -- # cat 00:14:22.835 16:53:11 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:23.094 [2024-11-05 16:53:11.891723] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:23.094 [2024-11-05 16:53:11.893738] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:23.094 [2024-11-05 16:53:11.893830] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:14:23.094 [2024-11-05 16:53:11.893843] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:23.094 [2024-11-05 16:53:11.893962] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:23.094 [2024-11-05 16:53:11.894327] 
bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:14:23.094 [2024-11-05 16:53:11.894351] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006f80 00:14:23.094 [2024-11-05 16:53:11.894496] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.094 Base_1 00:14:23.094 Base_2 00:14:23.094 16:53:11 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:23.094 16:53:11 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:23.094 16:53:11 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:14:23.353 16:53:12 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:14:23.353 16:53:12 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:14:23.353 16:53:12 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:23.353 16:53:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:23.353 16:53:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:23.353 16:53:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:23.353 16:53:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:23.353 16:53:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:23.353 16:53:12 -- bdev/nbd_common.sh@12 -- # local i 00:14:23.353 16:53:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:23.353 16:53:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:23.353 16:53:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:23.612 [2024-11-05 16:53:12.371815] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:14:23.612 /dev/nbd0 00:14:23.612 16:53:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:23.612 16:53:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:23.612 16:53:12 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:14:23.612 16:53:12 -- common/autotest_common.sh@867 -- # local i 00:14:23.612 16:53:12 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:23.612 16:53:12 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:23.612 16:53:12 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:14:23.612 16:53:12 -- common/autotest_common.sh@871 -- # break 00:14:23.612 16:53:12 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:23.612 16:53:12 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:23.612 16:53:12 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.612 1+0 records in 00:14:23.612 1+0 records out 00:14:23.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003197 s, 12.8 MB/s 00:14:23.612 16:53:12 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.612 16:53:12 -- common/autotest_common.sh@884 -- # size=4096 00:14:23.612 16:53:12 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.612 16:53:12 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:23.612 16:53:12 -- common/autotest_common.sh@887 -- # return 0 00:14:23.612 16:53:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.612 16:53:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:23.612 16:53:12 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:23.612 16:53:12 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:23.612 16:53:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:23.871 16:53:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:23.871 { 00:14:23.871 "nbd_device": "/dev/nbd0", 00:14:23.871 "bdev_name": "raid" 00:14:23.871 } 00:14:23.871 ]' 00:14:23.871 16:53:12 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:23.871 { 00:14:23.871 "nbd_device": "/dev/nbd0", 00:14:23.871 "bdev_name": "raid" 00:14:23.871 } 00:14:23.871 ]' 00:14:23.871 16:53:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:23.871 16:53:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:23.871 16:53:12 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:23.871 16:53:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:23.871 16:53:12 -- bdev/nbd_common.sh@65 -- # count=1 00:14:23.871 16:53:12 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:23.871 16:53:12 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:23.871 16:53:12 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:23.871 16:53:12 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:23.871 16:53:12 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:23.871 16:53:12 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:23.871 16:53:12 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:23.871 16:53:12 -- bdev/bdev_raid.sh@20 -- # local blksize 00:14:23.871 16:53:12 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:23.871 16:53:12 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:14:23.871 16:53:12 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:14:24.131 16:53:12 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:24.131 16:53:12 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:24.131 16:53:12 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:24.131 16:53:12 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:14:24.131 16:53:12 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:24.131 16:53:12 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:14:24.131 16:53:12 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:24.131 16:53:12 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:24.131 16:53:12 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:24.131 16:53:12 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:24.131 4096+0 records in 00:14:24.131 4096+0 records out 00:14:24.131 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0199709 s, 105 MB/s 00:14:24.131 16:53:12 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:24.389 4096+0 records in 00:14:24.389 4096+0 records out 00:14:24.389 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.279794 s, 7.5 MB/s 00:14:24.389 16:53:13 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:24.389 16:53:13 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:24.389 16:53:13 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:24.389 16:53:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:24.390 128+0 records in 00:14:24.390 128+0 records out 00:14:24.390 65536 bytes (66 kB, 64 KiB) copied, 0.000898593 s, 
72.9 MB/s 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:24.390 2035+0 records in 00:14:24.390 2035+0 records out 00:14:24.390 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00869011 s, 120 MB/s 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:24.390 456+0 records in 00:14:24.390 456+0 records out 00:14:24.390 233472 bytes (233 kB, 228 KiB) copied, 0.00192137 s, 122 MB/s 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:24.390 16:53:13 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:24.390 16:53:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:24.390 16:53:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:24.390 16:53:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:24.390 16:53:13 -- bdev/nbd_common.sh@51 -- # local i 00:14:24.390 16:53:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.390 16:53:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:24.649 16:53:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:24.649 [2024-11-05 16:53:13.481988] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.649 16:53:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:24.649 16:53:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:24.649 16:53:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.649 16:53:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.649 16:53:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:24.649 16:53:13 -- bdev/nbd_common.sh@41 -- # break 00:14:24.649 16:53:13 -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.649 16:53:13 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:24.649 16:53:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:24.649 16:53:13 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:24.919 16:53:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:24.919 16:53:13 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:24.919 16:53:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:24.919 16:53:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:24.919 16:53:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:24.919 16:53:13 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:24.919 16:53:13 -- bdev/nbd_common.sh@65 -- # true 00:14:24.919 16:53:13 -- bdev/nbd_common.sh@65 -- # count=0 00:14:24.919 16:53:13 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:24.919 16:53:13 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:24.919 16:53:13 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:24.919 16:53:13 -- bdev/bdev_raid.sh@111 -- # killprocess 111642 00:14:24.919 16:53:13 -- common/autotest_common.sh@936 -- # '[' -z 111642 ']' 00:14:25.192 16:53:13 -- common/autotest_common.sh@940 -- # kill -0 111642 00:14:25.192 16:53:13 -- common/autotest_common.sh@941 -- # uname 00:14:25.192 16:53:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:25.192 16:53:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111642 00:14:25.192 16:53:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:25.192 16:53:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:25.192 killing process with pid 111642 00:14:25.192 16:53:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111642' 00:14:25.192 16:53:13 -- common/autotest_common.sh@955 -- # kill 111642 00:14:25.192 [2024-11-05 16:53:13.829386] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:25.192 [2024-11-05 16:53:13.829546] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.192 [2024-11-05 16:53:13.829611] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.192 [2024-11-05 16:53:13.829626] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name raid, state offline 00:14:25.192 16:53:13 -- common/autotest_common.sh@960 -- # wait 111642 00:14:25.192 [2024-11-05 16:53:13.967542] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:26.128 16:53:14 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:26.128 00:14:26.128 real 0m4.421s 00:14:26.128 user 0m5.751s 00:14:26.128 sys 0m0.976s 00:14:26.128 16:53:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:26.128 16:53:14 -- common/autotest_common.sh@10 -- # set +x 00:14:26.128 ************************************ 00:14:26.128 END TEST raid_function_test_raid0 00:14:26.128 ************************************ 00:14:26.129 16:53:14 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:14:26.129 16:53:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:26.129 16:53:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:26.129 16:53:14 -- common/autotest_common.sh@10 -- # set +x 00:14:26.129 ************************************ 00:14:26.129 START TEST raid_function_test_concat 00:14:26.129 ************************************ 00:14:26.129 16:53:15 -- common/autotest_common.sh@1114 -- # raid_function_test concat 00:14:26.129 16:53:15 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:14:26.129 16:53:15 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:14:26.129 16:53:15 -- bdev/bdev_raid.sh@83 -- # 
local raid_bdev 00:14:26.129 16:53:15 -- bdev/bdev_raid.sh@86 -- # raid_pid=111800 00:14:26.129 Process raid pid: 111800 00:14:26.129 16:53:15 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 111800' 00:14:26.129 16:53:15 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:26.129 16:53:15 -- bdev/bdev_raid.sh@88 -- # waitforlisten 111800 /var/tmp/spdk-raid.sock 00:14:26.129 16:53:15 -- common/autotest_common.sh@829 -- # '[' -z 111800 ']' 00:14:26.387 16:53:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:26.387 16:53:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:26.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:26.387 16:53:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:26.387 16:53:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:26.387 16:53:15 -- common/autotest_common.sh@10 -- # set +x 00:14:26.387 [2024-11-05 16:53:15.069737] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:26.387 [2024-11-05 16:53:15.069944] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.387 [2024-11-05 16:53:15.225099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.646 [2024-11-05 16:53:15.399694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.905 [2024-11-05 16:53:15.583223] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.473 16:53:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:27.473 16:53:16 -- common/autotest_common.sh@862 -- # return 0 00:14:27.473 16:53:16 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:14:27.473 16:53:16 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:14:27.473 16:53:16 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:27.473 16:53:16 -- bdev/bdev_raid.sh@70 -- # cat 00:14:27.473 16:53:16 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:27.732 [2024-11-05 16:53:16.392065] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:27.732 [2024-11-05 16:53:16.394073] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:27.732 [2024-11-05 16:53:16.394162] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:14:27.732 [2024-11-05 16:53:16.394176] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:27.732 [2024-11-05 16:53:16.394307] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:27.732 [2024-11-05 16:53:16.394662] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:14:27.732 [2024-11-05 16:53:16.394686] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006f80 00:14:27.732 [2024-11-05 16:53:16.394833] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.732 Base_1 00:14:27.732 Base_2 00:14:27.732 16:53:16 -- bdev/bdev_raid.sh@77 -- # rm -rf 
/home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:27.732 16:53:16 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:27.732 16:53:16 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:14:27.992 16:53:16 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:14:27.992 16:53:16 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:14:27.992 16:53:16 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:27.992 16:53:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:27.992 16:53:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:27.992 16:53:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:27.992 16:53:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:27.992 16:53:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:27.992 16:53:16 -- bdev/nbd_common.sh@12 -- # local i 00:14:27.992 16:53:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:27.992 16:53:16 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:27.992 16:53:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:27.992 [2024-11-05 16:53:16.864163] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:14:28.251 /dev/nbd0 00:14:28.251 16:53:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:28.251 16:53:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:28.251 16:53:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:14:28.251 16:53:16 -- common/autotest_common.sh@867 -- # local i 00:14:28.251 16:53:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:28.251 16:53:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:28.251 16:53:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:14:28.251 16:53:16 -- common/autotest_common.sh@871 -- # break 00:14:28.251 16:53:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:28.251 16:53:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:28.251 16:53:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:28.251 1+0 records in 00:14:28.251 1+0 records out 00:14:28.251 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029533 s, 13.9 MB/s 00:14:28.251 16:53:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.251 16:53:16 -- common/autotest_common.sh@884 -- # size=4096 00:14:28.251 16:53:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.251 16:53:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:28.251 16:53:16 -- common/autotest_common.sh@887 -- # return 0 00:14:28.251 16:53:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.251 16:53:16 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.251 16:53:16 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:28.251 16:53:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:28.251 16:53:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:28.251 16:53:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:28.251 { 00:14:28.251 "nbd_device": "/dev/nbd0", 00:14:28.251 "bdev_name": "raid" 00:14:28.251 } 00:14:28.251 ]' 00:14:28.251 16:53:17 -- bdev/nbd_common.sh@64 -- # echo '[ 
00:14:28.251 { 00:14:28.251 "nbd_device": "/dev/nbd0", 00:14:28.251 "bdev_name": "raid" 00:14:28.251 } 00:14:28.251 ]' 00:14:28.251 16:53:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:28.509 16:53:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:28.510 16:53:17 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:28.510 16:53:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:28.510 16:53:17 -- bdev/nbd_common.sh@65 -- # count=1 00:14:28.510 16:53:17 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@20 -- # local blksize 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:28.510 4096+0 records in 00:14:28.510 4096+0 records out 00:14:28.510 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0284133 s, 73.8 MB/s 00:14:28.510 16:53:17 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:28.768 4096+0 records in 00:14:28.768 4096+0 records out 00:14:28.768 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.277102 s, 7.6 MB/s 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:28.768 128+0 records in 00:14:28.768 128+0 records out 00:14:28.768 65536 bytes (66 kB, 64 KiB) copied, 0.000513284 s, 128 MB/s 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@38 -- # 
unmap_off=526336 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:28.768 2035+0 records in 00:14:28.768 2035+0 records out 00:14:28.768 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00711036 s, 147 MB/s 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:28.768 456+0 records in 00:14:28.768 456+0 records out 00:14:28.768 233472 bytes (233 kB, 228 KiB) copied, 0.00175073 s, 133 MB/s 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:28.768 16:53:17 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:28.768 16:53:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:28.768 16:53:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:28.768 16:53:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:28.768 16:53:17 -- bdev/nbd_common.sh@51 -- # local i 00:14:28.768 16:53:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:28.768 16:53:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:29.027 [2024-11-05 16:53:17.888713] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.027 16:53:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:29.027 16:53:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:29.027 16:53:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:29.027 16:53:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.027 16:53:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.027 16:53:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:29.027 16:53:17 -- bdev/nbd_common.sh@41 -- # break 00:14:29.027 16:53:17 -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.027 16:53:17 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:29.027 16:53:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:29.027 16:53:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:29.594 16:53:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:29.594 16:53:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:29.594 16:53:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:29.594 16:53:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:29.594 16:53:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:29.595 16:53:18 
-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:29.595 16:53:18 -- bdev/nbd_common.sh@65 -- # true 00:14:29.595 16:53:18 -- bdev/nbd_common.sh@65 -- # count=0 00:14:29.595 16:53:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:29.595 16:53:18 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:29.595 16:53:18 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:29.595 16:53:18 -- bdev/bdev_raid.sh@111 -- # killprocess 111800 00:14:29.595 16:53:18 -- common/autotest_common.sh@936 -- # '[' -z 111800 ']' 00:14:29.595 16:53:18 -- common/autotest_common.sh@940 -- # kill -0 111800 00:14:29.595 16:53:18 -- common/autotest_common.sh@941 -- # uname 00:14:29.595 16:53:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:29.595 16:53:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111800 00:14:29.595 16:53:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:29.595 16:53:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:29.595 killing process with pid 111800 00:14:29.595 16:53:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111800' 00:14:29.595 16:53:18 -- common/autotest_common.sh@955 -- # kill 111800 00:14:29.595 [2024-11-05 16:53:18.272829] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:29.595 [2024-11-05 16:53:18.272926] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.595 [2024-11-05 16:53:18.272993] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:29.595 [2024-11-05 16:53:18.273003] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name raid, state offline 00:14:29.595 16:53:18 -- common/autotest_common.sh@960 -- # wait 111800 00:14:29.595 [2024-11-05 16:53:18.410243] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:30.532 16:53:19 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:30.532 00:14:30.532 real 0m4.371s 00:14:30.532 user 0m5.755s 00:14:30.532 sys 0m0.890s 00:14:30.532 16:53:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:30.532 16:53:19 -- common/autotest_common.sh@10 -- # set +x 00:14:30.532 ************************************ 00:14:30.532 END TEST raid_function_test_concat 00:14:30.532 ************************************ 00:14:30.791 16:53:19 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:14:30.791 16:53:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:30.791 16:53:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:30.791 16:53:19 -- common/autotest_common.sh@10 -- # set +x 00:14:30.791 ************************************ 00:14:30.791 START TEST raid0_resize_test 00:14:30.791 ************************************ 00:14:30.791 16:53:19 -- common/autotest_common.sh@1114 -- # raid0_resize_test 00:14:30.791 16:53:19 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:14:30.791 16:53:19 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:14:30.791 16:53:19 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:14:30.791 16:53:19 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:14:30.791 16:53:19 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:14:30.791 16:53:19 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:14:30.791 16:53:19 -- bdev/bdev_raid.sh@301 -- # raid_pid=111956 00:14:30.791 Process raid pid: 111956 00:14:30.791 16:53:19 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 111956' 00:14:30.791 16:53:19 -- bdev/bdev_raid.sh@303 -- # 
waitforlisten 111956 /var/tmp/spdk-raid.sock 00:14:30.791 16:53:19 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:30.791 16:53:19 -- common/autotest_common.sh@829 -- # '[' -z 111956 ']' 00:14:30.791 16:53:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:30.791 16:53:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:30.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:30.791 16:53:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:30.791 16:53:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:30.791 16:53:19 -- common/autotest_common.sh@10 -- # set +x 00:14:30.791 [2024-11-05 16:53:19.514421] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:30.791 [2024-11-05 16:53:19.514658] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.791 [2024-11-05 16:53:19.677293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.050 [2024-11-05 16:53:19.846065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.309 [2024-11-05 16:53:20.026345] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.569 16:53:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.569 16:53:20 -- common/autotest_common.sh@862 -- # return 0 00:14:31.569 16:53:20 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:14:31.827 Base_1 00:14:31.827 16:53:20 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:14:32.086 Base_2 00:14:32.086 16:53:20 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:14:32.346 [2024-11-05 16:53:21.082589] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:32.346 [2024-11-05 16:53:21.084339] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:32.346 [2024-11-05 16:53:21.084429] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:14:32.346 [2024-11-05 16:53:21.084441] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:32.346 [2024-11-05 16:53:21.084566] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005450 00:14:32.346 [2024-11-05 16:53:21.084861] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:14:32.346 [2024-11-05 16:53:21.084885] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000006f80 00:14:32.346 [2024-11-05 16:53:21.085027] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.346 16:53:21 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:14:32.606 [2024-11-05 16:53:21.322635] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:32.606 
[2024-11-05 16:53:21.322661] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:14:32.606 true 00:14:32.606 16:53:21 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:32.606 16:53:21 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:14:32.865 [2024-11-05 16:53:21.514761] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:32.865 16:53:21 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:14:32.865 16:53:21 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:14:32.865 16:53:21 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:14:32.865 16:53:21 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:14:32.865 [2024-11-05 16:53:21.706674] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:32.865 [2024-11-05 16:53:21.706702] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:32.865 [2024-11-05 16:53:21.706775] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:14:32.865 [2024-11-05 16:53:21.706833] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:32.865 true 00:14:32.865 16:53:21 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:32.865 16:53:21 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:14:33.125 [2024-11-05 16:53:21.902826] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:33.125 16:53:21 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:14:33.125 16:53:21 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:14:33.125 16:53:21 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:14:33.125 16:53:21 -- bdev/bdev_raid.sh@332 -- # killprocess 111956 00:14:33.125 16:53:21 -- common/autotest_common.sh@936 -- # '[' -z 111956 ']' 00:14:33.125 16:53:21 -- common/autotest_common.sh@940 -- # kill -0 111956 00:14:33.125 16:53:21 -- common/autotest_common.sh@941 -- # uname 00:14:33.125 16:53:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:33.125 16:53:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111956 00:14:33.125 16:53:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:33.125 16:53:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:33.125 killing process with pid 111956 00:14:33.125 16:53:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111956' 00:14:33.125 16:53:21 -- common/autotest_common.sh@955 -- # kill 111956 00:14:33.125 [2024-11-05 16:53:21.945413] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:33.125 [2024-11-05 16:53:21.945492] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:33.125 [2024-11-05 16:53:21.945546] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:33.125 [2024-11-05 16:53:21.945558] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Raid, state offline 00:14:33.125 [2024-11-05 16:53:21.946146] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.125 16:53:21 -- common/autotest_common.sh@960 -- # wait 111956 00:14:34.064 16:53:22 -- 
bdev/bdev_raid.sh@334 -- # return 0
00:14:34.064
00:14:34.064 real 0m3.423s
00:14:34.064 user 0m4.862s
00:14:34.064 sys 0m0.503s
00:14:34.064 16:53:22 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:34.064 16:53:22 -- common/autotest_common.sh@10 -- # set +x
00:14:34.064 ************************************
00:14:34.064 END TEST raid0_resize_test
00:14:34.064 ************************************
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@725 -- # for n in {2..4}
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false
00:14:34.064 16:53:22 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:14:34.064 16:53:22 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:34.064 16:53:22 -- common/autotest_common.sh@10 -- # set +x
00:14:34.064 ************************************
00:14:34.064 START TEST raid_state_function_test
00:14:34.064 ************************************
00:14:34.064 16:53:22 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 2 false
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@204 -- # local superblock=false
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@208 -- # local strip_size
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@213 -- # strip_size=64
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@226 -- # raid_pid=112045
00:14:34.064 Process raid pid: 112045
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 112045'
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@228 -- # waitforlisten 112045 /var/tmp/spdk-raid.sock
00:14:34.064 16:53:22 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:14:34.064 16:53:22 -- common/autotest_common.sh@829 -- # '[' -z 112045 ']'
00:14:34.064 16:53:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:14:34.064 16:53:22 -- common/autotest_common.sh@834 -- # local max_retries=100
00:14:34.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:14:34.064 16:53:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:14:34.064 16:53:22 -- common/autotest_common.sh@838 -- # xtrace_disable
00:14:34.064 16:53:22 -- common/autotest_common.sh@10 -- # set +x
00:14:34.323 [2024-11-05 16:53:23.002930] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:34.323 [2024-11-05 16:53:23.003125] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:34.324 [2024-11-05 16:53:23.172220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:34.583 [2024-11-05 16:53:23.338687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:34.841 [2024-11-05 16:53:23.512838] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:35.101 16:53:23 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:35.101 16:53:23 -- common/autotest_common.sh@862 -- # return 0
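The test that starts here drives raid bdev state transitions over RPC: a raid0 bdev is created on two base bdevs that do not exist yet, so it must stay in the configuring state until both members appear. A minimal sketch of that pattern, assuming the same rpc.py socket the trace uses (the jq filter matches the verify steps shown below):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Create a raid0 bdev whose members do not exist yet; it cannot go online.
    $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    state=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state')
    if [ "$state" != "configuring" ]; then
        echo "unexpected raid state: $state" >&2
        exit 1
    fi
    # Only after both BaseBdev1 and BaseBdev2 are created and claimed can the
    # raid bdev leave configuring and become online.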
00:14:35.101 16:53:23 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:35.361 [2024-11-05 16:53:24.173623] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:35.361 [2024-11-05 16:53:24.173736] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:35.361 [2024-11-05 16:53:24.173765] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:35.361 [2024-11-05 16:53:24.173783] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:35.361 16:53:24 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:14:35.361 16:53:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:35.361 16:53:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:35.361 16:53:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:35.361 16:53:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:35.361 16:53:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:35.361 16:53:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:35.361 16:53:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:35.361 16:53:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:35.361 16:53:24 -- bdev/bdev_raid.sh@125 -- # local tmp
00:14:35.361 16:53:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:35.361 16:53:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:35.620 16:53:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:35.620 "name": "Existed_Raid",
00:14:35.620 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:35.620 "strip_size_kb": 64,
00:14:35.620 "state": "configuring",
00:14:35.620 "raid_level": "raid0",
00:14:35.620 "superblock": false,
00:14:35.620 "num_base_bdevs": 2,
00:14:35.620 "num_base_bdevs_discovered": 0,
00:14:35.620 "num_base_bdevs_operational": 2,
00:14:35.620 "base_bdevs_list": [
00:14:35.620 {
00:14:35.620 "name": "BaseBdev1",
00:14:35.620 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:35.620 "is_configured": false,
00:14:35.620 "data_offset": 0,
00:14:35.620 "data_size": 0
00:14:35.620 },
00:14:35.620 {
00:14:35.620 "name": "BaseBdev2",
00:14:35.620 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:35.620 "is_configured": false,
00:14:35.620 "data_offset": 0,
00:14:35.620 "data_size": 0
00:14:35.620 }
00:14:35.620 ]
00:14:35.620 }'
00:14:35.620 16:53:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:35.620 16:53:24 -- common/autotest_common.sh@10 -- # set +x
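The dump above is exactly what verify_raid_bdev_state inspects: with neither base bdev present, num_base_bdevs_discovered is 0 while num_base_bdevs_operational stays 2, and every entry in base_bdevs_list is unconfigured. A small sketch of field checks over that output (the check_field helper is illustrative, not part of the suite):

    check_field() {  # illustrative helper: compare one jq filter to an expected value
        local json=$1 filter=$2 expected=$3 actual
        actual=$(echo "$json" | jq -r "$filter")
        [ "$actual" = "$expected" ] || { echo "expected $expected, got $actual ($filter)" >&2; return 1; }
    }
    check_field "$raid_bdev_info" '.state' configuring
    check_field "$raid_bdev_info" '.num_base_bdevs' 2
    check_field "$raid_bdev_info" '.num_base_bdevs_discovered' 0
    check_field "$raid_bdev_info" '[.base_bdevs_list[] | select(.is_configured)] | length' 0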
00:14:35.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.620 "is_configured": false, 00:14:35.620 "data_offset": 0, 00:14:35.620 "data_size": 0 00:14:35.620 }, 00:14:35.620 { 00:14:35.620 "name": "BaseBdev2", 00:14:35.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.620 "is_configured": false, 00:14:35.620 "data_offset": 0, 00:14:35.620 "data_size": 0 00:14:35.620 } 00:14:35.620 ] 00:14:35.620 }' 00:14:35.620 16:53:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:35.620 16:53:24 -- common/autotest_common.sh@10 -- # set +x 00:14:36.189 16:53:25 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:36.448 [2024-11-05 16:53:25.313776] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:36.448 [2024-11-05 16:53:25.313848] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:14:36.448 16:53:25 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:36.707 [2024-11-05 16:53:25.541812] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:36.707 [2024-11-05 16:53:25.541911] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:36.707 [2024-11-05 16:53:25.541939] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:36.707 [2024-11-05 16:53:25.541962] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.707 16:53:25 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:36.966 [2024-11-05 16:53:25.766551] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.966 BaseBdev1 00:14:36.966 16:53:25 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:36.966 16:53:25 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:36.966 16:53:25 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:36.966 16:53:25 -- common/autotest_common.sh@899 -- # local i 00:14:36.966 16:53:25 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:36.966 16:53:25 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:36.966 16:53:25 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:37.225 16:53:25 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:37.485 [ 00:14:37.485 { 00:14:37.485 "name": "BaseBdev1", 00:14:37.485 "aliases": [ 00:14:37.485 "87695781-a9e5-4f69-811b-39cf69986995" 00:14:37.485 ], 00:14:37.485 "product_name": "Malloc disk", 00:14:37.485 "block_size": 512, 00:14:37.485 "num_blocks": 65536, 00:14:37.485 "uuid": "87695781-a9e5-4f69-811b-39cf69986995", 00:14:37.485 "assigned_rate_limits": { 00:14:37.485 "rw_ios_per_sec": 0, 00:14:37.485 "rw_mbytes_per_sec": 0, 00:14:37.485 "r_mbytes_per_sec": 0, 00:14:37.485 "w_mbytes_per_sec": 0 00:14:37.485 }, 00:14:37.485 "claimed": true, 00:14:37.485 "claim_type": "exclusive_write", 00:14:37.485 "zoned": false, 00:14:37.485 "supported_io_types": { 00:14:37.485 "read": true, 00:14:37.485 "write": true, 00:14:37.485 "unmap": true, 
00:14:37.485 "write_zeroes": true, 00:14:37.485 "flush": true, 00:14:37.485 "reset": true, 00:14:37.485 "compare": false, 00:14:37.485 "compare_and_write": false, 00:14:37.485 "abort": true, 00:14:37.485 "nvme_admin": false, 00:14:37.485 "nvme_io": false 00:14:37.485 }, 00:14:37.485 "memory_domains": [ 00:14:37.485 { 00:14:37.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.485 "dma_device_type": 2 00:14:37.485 } 00:14:37.485 ], 00:14:37.485 "driver_specific": {} 00:14:37.485 } 00:14:37.485 ] 00:14:37.485 16:53:26 -- common/autotest_common.sh@905 -- # return 0 00:14:37.485 16:53:26 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:37.485 16:53:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:37.485 16:53:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:37.485 16:53:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:37.485 16:53:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:37.485 16:53:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:37.485 16:53:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:37.485 16:53:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:37.485 16:53:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:37.485 16:53:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:37.485 16:53:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.485 16:53:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.744 16:53:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:37.744 "name": "Existed_Raid", 00:14:37.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.744 "strip_size_kb": 64, 00:14:37.744 "state": "configuring", 00:14:37.744 "raid_level": "raid0", 00:14:37.744 "superblock": false, 00:14:37.744 "num_base_bdevs": 2, 00:14:37.744 "num_base_bdevs_discovered": 1, 00:14:37.744 "num_base_bdevs_operational": 2, 00:14:37.744 "base_bdevs_list": [ 00:14:37.744 { 00:14:37.744 "name": "BaseBdev1", 00:14:37.744 "uuid": "87695781-a9e5-4f69-811b-39cf69986995", 00:14:37.744 "is_configured": true, 00:14:37.744 "data_offset": 0, 00:14:37.744 "data_size": 65536 00:14:37.744 }, 00:14:37.744 { 00:14:37.744 "name": "BaseBdev2", 00:14:37.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.744 "is_configured": false, 00:14:37.744 "data_offset": 0, 00:14:37.744 "data_size": 0 00:14:37.744 } 00:14:37.744 ] 00:14:37.744 }' 00:14:37.744 16:53:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:37.744 16:53:26 -- common/autotest_common.sh@10 -- # set +x 00:14:38.313 16:53:27 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:38.572 [2024-11-05 16:53:27.314937] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:38.572 [2024-11-05 16:53:27.315006] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:38.572 16:53:27 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:38.572 16:53:27 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:38.831 [2024-11-05 16:53:27.571048] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:38.831 [2024-11-05 
16:53:27.573041] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:38.831 [2024-11-05 16:53:27.573121] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:38.831 16:53:27 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:38.831 16:53:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:38.831 16:53:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:38.831 16:53:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:38.831 16:53:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:38.831 16:53:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:38.831 16:53:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:38.831 16:53:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:38.831 16:53:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:38.831 16:53:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:38.831 16:53:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:38.831 16:53:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:38.831 16:53:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.831 16:53:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.115 16:53:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:39.115 "name": "Existed_Raid", 00:14:39.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.115 "strip_size_kb": 64, 00:14:39.115 "state": "configuring", 00:14:39.115 "raid_level": "raid0", 00:14:39.115 "superblock": false, 00:14:39.115 "num_base_bdevs": 2, 00:14:39.115 "num_base_bdevs_discovered": 1, 00:14:39.115 "num_base_bdevs_operational": 2, 00:14:39.115 "base_bdevs_list": [ 00:14:39.115 { 00:14:39.115 "name": "BaseBdev1", 00:14:39.115 "uuid": "87695781-a9e5-4f69-811b-39cf69986995", 00:14:39.115 "is_configured": true, 00:14:39.115 "data_offset": 0, 00:14:39.115 "data_size": 65536 00:14:39.115 }, 00:14:39.115 { 00:14:39.115 "name": "BaseBdev2", 00:14:39.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.115 "is_configured": false, 00:14:39.115 "data_offset": 0, 00:14:39.115 "data_size": 0 00:14:39.115 } 00:14:39.115 ] 00:14:39.115 }' 00:14:39.115 16:53:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:39.115 16:53:27 -- common/autotest_common.sh@10 -- # set +x 00:14:39.706 16:53:28 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:39.966 [2024-11-05 16:53:28.666027] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:39.966 [2024-11-05 16:53:28.666091] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:14:39.966 [2024-11-05 16:53:28.666101] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:39.966 [2024-11-05 16:53:28.666206] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:39.966 [2024-11-05 16:53:28.666580] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:14:39.966 [2024-11-05 16:53:28.666601] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:14:39.966 [2024-11-05 16:53:28.666929] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:39.966 BaseBdev2 00:14:39.966 16:53:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:39.966 16:53:28 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:39.966 16:53:28 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:39.966 16:53:28 -- common/autotest_common.sh@899 -- # local i 00:14:39.966 16:53:28 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:39.966 16:53:28 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:39.966 16:53:28 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:40.225 16:53:28 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:40.225 [ 00:14:40.225 { 00:14:40.225 "name": "BaseBdev2", 00:14:40.225 "aliases": [ 00:14:40.225 "97f17d08-580a-4c3e-928d-f7baaa82a970" 00:14:40.225 ], 00:14:40.225 "product_name": "Malloc disk", 00:14:40.225 "block_size": 512, 00:14:40.225 "num_blocks": 65536, 00:14:40.225 "uuid": "97f17d08-580a-4c3e-928d-f7baaa82a970", 00:14:40.225 "assigned_rate_limits": { 00:14:40.225 "rw_ios_per_sec": 0, 00:14:40.225 "rw_mbytes_per_sec": 0, 00:14:40.225 "r_mbytes_per_sec": 0, 00:14:40.225 "w_mbytes_per_sec": 0 00:14:40.225 }, 00:14:40.225 "claimed": true, 00:14:40.225 "claim_type": "exclusive_write", 00:14:40.225 "zoned": false, 00:14:40.225 "supported_io_types": { 00:14:40.225 "read": true, 00:14:40.225 "write": true, 00:14:40.225 "unmap": true, 00:14:40.225 "write_zeroes": true, 00:14:40.225 "flush": true, 00:14:40.225 "reset": true, 00:14:40.225 "compare": false, 00:14:40.225 "compare_and_write": false, 00:14:40.225 "abort": true, 00:14:40.225 "nvme_admin": false, 00:14:40.225 "nvme_io": false 00:14:40.225 }, 00:14:40.225 "memory_domains": [ 00:14:40.225 { 00:14:40.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.225 "dma_device_type": 2 00:14:40.225 } 00:14:40.225 ], 00:14:40.225 "driver_specific": {} 00:14:40.225 } 00:14:40.225 ] 00:14:40.484 16:53:29 -- common/autotest_common.sh@905 -- # return 0 00:14:40.485 16:53:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:40.485 16:53:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:40.485 16:53:29 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:40.485 16:53:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:40.485 16:53:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:40.485 16:53:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:40.485 16:53:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:40.485 16:53:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:40.485 16:53:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:40.485 16:53:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:40.485 16:53:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:40.485 16:53:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:40.485 16:53:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.485 16:53:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.485 16:53:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:40.485 "name": "Existed_Raid", 00:14:40.485 "uuid": "616124dd-2392-402b-aa6c-cc2d9bee0300", 00:14:40.485 "strip_size_kb": 64, 00:14:40.485 "state": 
"online", 00:14:40.485 "raid_level": "raid0", 00:14:40.485 "superblock": false, 00:14:40.485 "num_base_bdevs": 2, 00:14:40.485 "num_base_bdevs_discovered": 2, 00:14:40.485 "num_base_bdevs_operational": 2, 00:14:40.485 "base_bdevs_list": [ 00:14:40.485 { 00:14:40.485 "name": "BaseBdev1", 00:14:40.485 "uuid": "87695781-a9e5-4f69-811b-39cf69986995", 00:14:40.485 "is_configured": true, 00:14:40.485 "data_offset": 0, 00:14:40.485 "data_size": 65536 00:14:40.485 }, 00:14:40.485 { 00:14:40.485 "name": "BaseBdev2", 00:14:40.485 "uuid": "97f17d08-580a-4c3e-928d-f7baaa82a970", 00:14:40.485 "is_configured": true, 00:14:40.485 "data_offset": 0, 00:14:40.485 "data_size": 65536 00:14:40.485 } 00:14:40.485 ] 00:14:40.485 }' 00:14:40.485 16:53:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:40.485 16:53:29 -- common/autotest_common.sh@10 -- # set +x 00:14:41.421 16:53:29 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:41.421 [2024-11-05 16:53:30.198521] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:41.421 [2024-11-05 16:53:30.198563] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.421 [2024-11-05 16:53:30.198665] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.421 16:53:30 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:41.421 16:53:30 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:41.421 16:53:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:41.421 16:53:30 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:41.421 16:53:30 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:41.421 16:53:30 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:41.421 16:53:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:41.421 16:53:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:41.421 16:53:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:41.421 16:53:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:41.421 16:53:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:41.421 16:53:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:41.421 16:53:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:41.421 16:53:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:41.421 16:53:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:41.421 16:53:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.421 16:53:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.680 16:53:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:41.680 "name": "Existed_Raid", 00:14:41.680 "uuid": "616124dd-2392-402b-aa6c-cc2d9bee0300", 00:14:41.680 "strip_size_kb": 64, 00:14:41.680 "state": "offline", 00:14:41.680 "raid_level": "raid0", 00:14:41.680 "superblock": false, 00:14:41.680 "num_base_bdevs": 2, 00:14:41.680 "num_base_bdevs_discovered": 1, 00:14:41.680 "num_base_bdevs_operational": 1, 00:14:41.680 "base_bdevs_list": [ 00:14:41.680 { 00:14:41.680 "name": null, 00:14:41.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.680 "is_configured": false, 00:14:41.680 "data_offset": 0, 00:14:41.680 "data_size": 65536 00:14:41.680 }, 00:14:41.680 { 00:14:41.680 "name": "BaseBdev2", 00:14:41.680 "uuid": "97f17d08-580a-4c3e-928d-f7baaa82a970", 00:14:41.680 
"is_configured": true, 00:14:41.680 "data_offset": 0, 00:14:41.680 "data_size": 65536 00:14:41.680 } 00:14:41.680 ] 00:14:41.680 }' 00:14:41.680 16:53:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:41.680 16:53:30 -- common/autotest_common.sh@10 -- # set +x 00:14:42.616 16:53:31 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:42.616 16:53:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:42.616 16:53:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.616 16:53:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:42.616 16:53:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:42.616 16:53:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:42.616 16:53:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:42.875 [2024-11-05 16:53:31.713784] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:42.875 [2024-11-05 16:53:31.713859] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:14:43.134 16:53:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:43.134 16:53:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:43.134 16:53:31 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:43.134 16:53:31 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.392 16:53:32 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:43.392 16:53:32 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:43.392 16:53:32 -- bdev/bdev_raid.sh@287 -- # killprocess 112045 00:14:43.392 16:53:32 -- common/autotest_common.sh@936 -- # '[' -z 112045 ']' 00:14:43.392 16:53:32 -- common/autotest_common.sh@940 -- # kill -0 112045 00:14:43.393 16:53:32 -- common/autotest_common.sh@941 -- # uname 00:14:43.393 16:53:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:43.393 16:53:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112045 00:14:43.393 16:53:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:43.393 16:53:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:43.393 16:53:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 112045' 00:14:43.393 killing process with pid 112045 00:14:43.393 16:53:32 -- common/autotest_common.sh@955 -- # kill 112045 00:14:43.393 [2024-11-05 16:53:32.080878] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:43.393 16:53:32 -- common/autotest_common.sh@960 -- # wait 112045 00:14:43.393 [2024-11-05 16:53:32.081040] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:44.329 00:14:44.329 real 0m10.090s 00:14:44.329 user 0m17.610s 00:14:44.329 sys 0m1.212s 00:14:44.329 16:53:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:44.329 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:14:44.329 ************************************ 00:14:44.329 END TEST raid_state_function_test 00:14:44.329 ************************************ 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:14:44.329 16:53:33 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:44.329 16:53:33 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:14:44.329 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:14:44.329 ************************************ 00:14:44.329 START TEST raid_state_function_test_sb 00:14:44.329 ************************************ 00:14:44.329 16:53:33 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 2 true 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@226 -- # raid_pid=112366 00:14:44.329 16:53:33 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 112366' 00:14:44.329 Process raid pid: 112366 00:14:44.330 16:53:33 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:44.330 16:53:33 -- bdev/bdev_raid.sh@228 -- # waitforlisten 112366 /var/tmp/spdk-raid.sock 00:14:44.330 16:53:33 -- common/autotest_common.sh@829 -- # '[' -z 112366 ']' 00:14:44.330 16:53:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:44.330 16:53:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:44.330 16:53:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:44.330 16:53:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.330 16:53:33 -- common/autotest_common.sh@10 -- # set +x 00:14:44.330 [2024-11-05 16:53:33.151439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
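
The _sb test starting here repeats the whole state-function flow with -s added to every bdev_raid_create call, so the raid metadata is persisted as an on-disk superblock instead of held only in memory. A minimal sketch of the changed create call (rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path used throughout this trace):

    # -s writes an on-disk superblock; the base bdev payload then starts at
    # data_offset 2048 and data_size shrinks from 65536 to 63488 blocks,
    # as the get_bdevs output further down in this trace shows
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
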
00:14:44.330 [2024-11-05 16:53:33.151630] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.588 [2024-11-05 16:53:33.319873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.847 [2024-11-05 16:53:33.490599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.847 [2024-11-05 16:53:33.670297] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:45.415 16:53:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.415 16:53:34 -- common/autotest_common.sh@862 -- # return 0 00:14:45.415 16:53:34 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:45.674 [2024-11-05 16:53:34.334756] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:45.674 [2024-11-05 16:53:34.334837] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:45.674 [2024-11-05 16:53:34.334849] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.674 [2024-11-05 16:53:34.334867] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.674 16:53:34 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:45.674 16:53:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:45.674 16:53:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:45.674 16:53:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:45.674 16:53:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:45.674 16:53:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:45.674 16:53:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:45.674 16:53:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:45.674 16:53:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:45.674 16:53:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:45.674 16:53:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.674 16:53:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.674 16:53:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:45.674 "name": "Existed_Raid", 00:14:45.674 "uuid": "1b6ef8c8-c7a5-4ba5-88d6-1c614943098d", 00:14:45.674 "strip_size_kb": 64, 00:14:45.674 "state": "configuring", 00:14:45.674 "raid_level": "raid0", 00:14:45.674 "superblock": true, 00:14:45.674 "num_base_bdevs": 2, 00:14:45.674 "num_base_bdevs_discovered": 0, 00:14:45.674 "num_base_bdevs_operational": 2, 00:14:45.674 "base_bdevs_list": [ 00:14:45.674 { 00:14:45.674 "name": "BaseBdev1", 00:14:45.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.674 "is_configured": false, 00:14:45.674 "data_offset": 0, 00:14:45.674 "data_size": 0 00:14:45.674 }, 00:14:45.674 { 00:14:45.674 "name": "BaseBdev2", 00:14:45.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.674 "is_configured": false, 00:14:45.674 "data_offset": 0, 00:14:45.674 "data_size": 0 00:14:45.674 } 00:14:45.674 ] 00:14:45.674 }' 00:14:45.674 16:53:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:45.674 16:53:34 -- 
common/autotest_common.sh@10 -- # set +x 00:14:46.611 16:53:35 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:46.611 [2024-11-05 16:53:35.342862] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:46.611 [2024-11-05 16:53:35.342961] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:14:46.611 16:53:35 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:46.883 [2024-11-05 16:53:35.534907] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:46.883 [2024-11-05 16:53:35.534997] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:46.883 [2024-11-05 16:53:35.535009] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:46.883 [2024-11-05 16:53:35.535036] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:46.883 16:53:35 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:46.883 [2024-11-05 16:53:35.757512] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:46.883 BaseBdev1 00:14:47.154 16:53:35 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:47.154 16:53:35 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:47.154 16:53:35 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:47.154 16:53:35 -- common/autotest_common.sh@899 -- # local i 00:14:47.154 16:53:35 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:47.154 16:53:35 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:47.154 16:53:35 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:47.154 16:53:35 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:47.413 [ 00:14:47.413 { 00:14:47.413 "name": "BaseBdev1", 00:14:47.413 "aliases": [ 00:14:47.413 "f65917ee-a068-4ea6-b115-1d27ff62c966" 00:14:47.413 ], 00:14:47.413 "product_name": "Malloc disk", 00:14:47.413 "block_size": 512, 00:14:47.413 "num_blocks": 65536, 00:14:47.413 "uuid": "f65917ee-a068-4ea6-b115-1d27ff62c966", 00:14:47.413 "assigned_rate_limits": { 00:14:47.413 "rw_ios_per_sec": 0, 00:14:47.413 "rw_mbytes_per_sec": 0, 00:14:47.413 "r_mbytes_per_sec": 0, 00:14:47.413 "w_mbytes_per_sec": 0 00:14:47.413 }, 00:14:47.413 "claimed": true, 00:14:47.413 "claim_type": "exclusive_write", 00:14:47.413 "zoned": false, 00:14:47.413 "supported_io_types": { 00:14:47.413 "read": true, 00:14:47.413 "write": true, 00:14:47.413 "unmap": true, 00:14:47.413 "write_zeroes": true, 00:14:47.413 "flush": true, 00:14:47.413 "reset": true, 00:14:47.413 "compare": false, 00:14:47.413 "compare_and_write": false, 00:14:47.413 "abort": true, 00:14:47.413 "nvme_admin": false, 00:14:47.413 "nvme_io": false 00:14:47.413 }, 00:14:47.413 "memory_domains": [ 00:14:47.413 { 00:14:47.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.413 "dma_device_type": 2 00:14:47.413 } 00:14:47.413 ], 00:14:47.413 "driver_specific": {} 00:14:47.413 } 00:14:47.413 ] 00:14:47.413 
16:53:36 -- common/autotest_common.sh@905 -- # return 0 00:14:47.413 16:53:36 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:47.413 16:53:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:47.413 16:53:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:47.413 16:53:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:47.413 16:53:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:47.413 16:53:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:47.413 16:53:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:47.413 16:53:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:47.413 16:53:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:47.413 16:53:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:47.413 16:53:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.413 16:53:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.672 16:53:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:47.672 "name": "Existed_Raid", 00:14:47.672 "uuid": "dbd4bd60-d5ca-47de-baa1-61ecf4041537", 00:14:47.672 "strip_size_kb": 64, 00:14:47.672 "state": "configuring", 00:14:47.672 "raid_level": "raid0", 00:14:47.672 "superblock": true, 00:14:47.672 "num_base_bdevs": 2, 00:14:47.672 "num_base_bdevs_discovered": 1, 00:14:47.672 "num_base_bdevs_operational": 2, 00:14:47.672 "base_bdevs_list": [ 00:14:47.672 { 00:14:47.672 "name": "BaseBdev1", 00:14:47.672 "uuid": "f65917ee-a068-4ea6-b115-1d27ff62c966", 00:14:47.672 "is_configured": true, 00:14:47.672 "data_offset": 2048, 00:14:47.672 "data_size": 63488 00:14:47.672 }, 00:14:47.672 { 00:14:47.672 "name": "BaseBdev2", 00:14:47.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.672 "is_configured": false, 00:14:47.672 "data_offset": 0, 00:14:47.672 "data_size": 0 00:14:47.672 } 00:14:47.672 ] 00:14:47.672 }' 00:14:47.672 16:53:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:47.672 16:53:36 -- common/autotest_common.sh@10 -- # set +x 00:14:48.240 16:53:37 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:48.498 [2024-11-05 16:53:37.269871] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:48.498 [2024-11-05 16:53:37.269944] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:48.498 16:53:37 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:48.498 16:53:37 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:48.757 16:53:37 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:49.016 BaseBdev1 00:14:49.016 16:53:37 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:49.016 16:53:37 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:49.016 16:53:37 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:49.016 16:53:37 -- common/autotest_common.sh@899 -- # local i 00:14:49.016 16:53:37 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:49.016 16:53:37 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:49.016 16:53:37 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:49.275 16:53:38 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:49.534 [ 00:14:49.534 { 00:14:49.534 "name": "BaseBdev1", 00:14:49.534 "aliases": [ 00:14:49.534 "ec21aced-94ed-486d-a6e6-c889ea12ee7c" 00:14:49.534 ], 00:14:49.534 "product_name": "Malloc disk", 00:14:49.534 "block_size": 512, 00:14:49.534 "num_blocks": 65536, 00:14:49.534 "uuid": "ec21aced-94ed-486d-a6e6-c889ea12ee7c", 00:14:49.534 "assigned_rate_limits": { 00:14:49.534 "rw_ios_per_sec": 0, 00:14:49.534 "rw_mbytes_per_sec": 0, 00:14:49.534 "r_mbytes_per_sec": 0, 00:14:49.534 "w_mbytes_per_sec": 0 00:14:49.534 }, 00:14:49.534 "claimed": false, 00:14:49.534 "zoned": false, 00:14:49.534 "supported_io_types": { 00:14:49.534 "read": true, 00:14:49.534 "write": true, 00:14:49.534 "unmap": true, 00:14:49.534 "write_zeroes": true, 00:14:49.534 "flush": true, 00:14:49.534 "reset": true, 00:14:49.534 "compare": false, 00:14:49.534 "compare_and_write": false, 00:14:49.534 "abort": true, 00:14:49.534 "nvme_admin": false, 00:14:49.534 "nvme_io": false 00:14:49.534 }, 00:14:49.534 "memory_domains": [ 00:14:49.534 { 00:14:49.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.534 "dma_device_type": 2 00:14:49.534 } 00:14:49.534 ], 00:14:49.534 "driver_specific": {} 00:14:49.534 } 00:14:49.534 ] 00:14:49.534 16:53:38 -- common/autotest_common.sh@905 -- # return 0 00:14:49.534 16:53:38 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:49.793 [2024-11-05 16:53:38.429397] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.793 [2024-11-05 16:53:38.431474] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:49.793 [2024-11-05 16:53:38.431542] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:49.793 16:53:38 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:49.793 16:53:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:49.793 16:53:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:49.793 16:53:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:49.793 16:53:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:49.793 16:53:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:49.793 16:53:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:49.793 16:53:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:49.793 16:53:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:49.793 16:53:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:49.793 16:53:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:49.793 16:53:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:49.793 16:53:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.793 16:53:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.793 16:53:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:49.793 "name": "Existed_Raid", 00:14:49.793 "uuid": "b35f2ba4-7e5b-4b55-9747-45acaea99a83", 00:14:49.793 "strip_size_kb": 64, 00:14:49.793 "state": 
"configuring", 00:14:49.793 "raid_level": "raid0", 00:14:49.793 "superblock": true, 00:14:49.793 "num_base_bdevs": 2, 00:14:49.793 "num_base_bdevs_discovered": 1, 00:14:49.793 "num_base_bdevs_operational": 2, 00:14:49.793 "base_bdevs_list": [ 00:14:49.793 { 00:14:49.793 "name": "BaseBdev1", 00:14:49.793 "uuid": "ec21aced-94ed-486d-a6e6-c889ea12ee7c", 00:14:49.793 "is_configured": true, 00:14:49.793 "data_offset": 2048, 00:14:49.793 "data_size": 63488 00:14:49.793 }, 00:14:49.793 { 00:14:49.793 "name": "BaseBdev2", 00:14:49.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.793 "is_configured": false, 00:14:49.793 "data_offset": 0, 00:14:49.793 "data_size": 0 00:14:49.793 } 00:14:49.793 ] 00:14:49.793 }' 00:14:49.793 16:53:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:49.793 16:53:38 -- common/autotest_common.sh@10 -- # set +x 00:14:50.361 16:53:39 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:50.934 [2024-11-05 16:53:39.548758] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:50.934 [2024-11-05 16:53:39.549003] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:14:50.934 [2024-11-05 16:53:39.549019] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:50.934 [2024-11-05 16:53:39.549162] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:50.934 [2024-11-05 16:53:39.549539] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:14:50.934 [2024-11-05 16:53:39.549564] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:14:50.934 [2024-11-05 16:53:39.549755] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.934 BaseBdev2 00:14:50.934 16:53:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:50.934 16:53:39 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:50.934 16:53:39 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:50.934 16:53:39 -- common/autotest_common.sh@899 -- # local i 00:14:50.934 16:53:39 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:50.934 16:53:39 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:50.934 16:53:39 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:50.934 16:53:39 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:51.194 [ 00:14:51.194 { 00:14:51.194 "name": "BaseBdev2", 00:14:51.194 "aliases": [ 00:14:51.194 "72eb57f8-0d7f-4d23-aefc-9667cab241e4" 00:14:51.194 ], 00:14:51.194 "product_name": "Malloc disk", 00:14:51.194 "block_size": 512, 00:14:51.194 "num_blocks": 65536, 00:14:51.194 "uuid": "72eb57f8-0d7f-4d23-aefc-9667cab241e4", 00:14:51.194 "assigned_rate_limits": { 00:14:51.194 "rw_ios_per_sec": 0, 00:14:51.194 "rw_mbytes_per_sec": 0, 00:14:51.194 "r_mbytes_per_sec": 0, 00:14:51.194 "w_mbytes_per_sec": 0 00:14:51.194 }, 00:14:51.194 "claimed": true, 00:14:51.194 "claim_type": "exclusive_write", 00:14:51.194 "zoned": false, 00:14:51.194 "supported_io_types": { 00:14:51.194 "read": true, 00:14:51.194 "write": true, 00:14:51.194 "unmap": true, 00:14:51.194 "write_zeroes": true, 00:14:51.194 "flush": true, 00:14:51.194 
"reset": true, 00:14:51.194 "compare": false, 00:14:51.194 "compare_and_write": false, 00:14:51.194 "abort": true, 00:14:51.194 "nvme_admin": false, 00:14:51.194 "nvme_io": false 00:14:51.194 }, 00:14:51.194 "memory_domains": [ 00:14:51.194 { 00:14:51.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.194 "dma_device_type": 2 00:14:51.194 } 00:14:51.195 ], 00:14:51.195 "driver_specific": {} 00:14:51.195 } 00:14:51.195 ] 00:14:51.195 16:53:40 -- common/autotest_common.sh@905 -- # return 0 00:14:51.195 16:53:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:51.195 16:53:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:51.195 16:53:40 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:51.195 16:53:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:51.195 16:53:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:51.195 16:53:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:51.195 16:53:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:51.195 16:53:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:51.195 16:53:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:51.195 16:53:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:51.195 16:53:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:51.195 16:53:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:51.195 16:53:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.195 16:53:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.453 16:53:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:51.453 "name": "Existed_Raid", 00:14:51.453 "uuid": "b35f2ba4-7e5b-4b55-9747-45acaea99a83", 00:14:51.453 "strip_size_kb": 64, 00:14:51.453 "state": "online", 00:14:51.453 "raid_level": "raid0", 00:14:51.453 "superblock": true, 00:14:51.453 "num_base_bdevs": 2, 00:14:51.453 "num_base_bdevs_discovered": 2, 00:14:51.453 "num_base_bdevs_operational": 2, 00:14:51.453 "base_bdevs_list": [ 00:14:51.453 { 00:14:51.453 "name": "BaseBdev1", 00:14:51.453 "uuid": "ec21aced-94ed-486d-a6e6-c889ea12ee7c", 00:14:51.453 "is_configured": true, 00:14:51.453 "data_offset": 2048, 00:14:51.453 "data_size": 63488 00:14:51.453 }, 00:14:51.453 { 00:14:51.453 "name": "BaseBdev2", 00:14:51.453 "uuid": "72eb57f8-0d7f-4d23-aefc-9667cab241e4", 00:14:51.453 "is_configured": true, 00:14:51.453 "data_offset": 2048, 00:14:51.453 "data_size": 63488 00:14:51.453 } 00:14:51.453 ] 00:14:51.453 }' 00:14:51.453 16:53:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:51.453 16:53:40 -- common/autotest_common.sh@10 -- # set +x 00:14:52.021 16:53:40 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:52.280 [2024-11-05 16:53:41.077220] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:52.280 [2024-11-05 16:53:41.077255] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:52.280 [2024-11-05 16:53:41.077320] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.280 16:53:41 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:52.280 16:53:41 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:52.280 16:53:41 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:52.280 16:53:41 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:52.280 
16:53:41 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:52.280 16:53:41 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:52.280 16:53:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:52.280 16:53:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:52.280 16:53:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:52.280 16:53:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:52.280 16:53:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:52.280 16:53:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:52.280 16:53:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:52.280 16:53:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:52.280 16:53:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:52.280 16:53:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.280 16:53:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.539 16:53:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:52.539 "name": "Existed_Raid", 00:14:52.539 "uuid": "b35f2ba4-7e5b-4b55-9747-45acaea99a83", 00:14:52.539 "strip_size_kb": 64, 00:14:52.539 "state": "offline", 00:14:52.539 "raid_level": "raid0", 00:14:52.539 "superblock": true, 00:14:52.539 "num_base_bdevs": 2, 00:14:52.539 "num_base_bdevs_discovered": 1, 00:14:52.539 "num_base_bdevs_operational": 1, 00:14:52.540 "base_bdevs_list": [ 00:14:52.540 { 00:14:52.540 "name": null, 00:14:52.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.540 "is_configured": false, 00:14:52.540 "data_offset": 2048, 00:14:52.540 "data_size": 63488 00:14:52.540 }, 00:14:52.540 { 00:14:52.540 "name": "BaseBdev2", 00:14:52.540 "uuid": "72eb57f8-0d7f-4d23-aefc-9667cab241e4", 00:14:52.540 "is_configured": true, 00:14:52.540 "data_offset": 2048, 00:14:52.540 "data_size": 63488 00:14:52.540 } 00:14:52.540 ] 00:14:52.540 }' 00:14:52.540 16:53:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:52.540 16:53:41 -- common/autotest_common.sh@10 -- # set +x 00:14:53.475 16:53:42 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:53.475 16:53:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:53.475 16:53:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.475 16:53:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:53.475 16:53:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:53.475 16:53:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:53.475 16:53:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:53.734 [2024-11-05 16:53:42.536683] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:53.734 [2024-11-05 16:53:42.536765] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:14:53.734 16:53:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:53.734 16:53:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:53.734 16:53:42 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.734 16:53:42 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:53.994 16:53:42 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:14:53.994 16:53:42 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:53.994 16:53:42 -- bdev/bdev_raid.sh@287 -- # killprocess 112366 00:14:53.994 16:53:42 -- common/autotest_common.sh@936 -- # '[' -z 112366 ']' 00:14:53.994 16:53:42 -- common/autotest_common.sh@940 -- # kill -0 112366 00:14:53.994 16:53:42 -- common/autotest_common.sh@941 -- # uname 00:14:53.994 16:53:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:53.994 16:53:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112366 00:14:54.274 16:53:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:54.274 16:53:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:54.274 16:53:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 112366' 00:14:54.274 killing process with pid 112366 00:14:54.274 16:53:42 -- common/autotest_common.sh@955 -- # kill 112366 00:14:54.274 [2024-11-05 16:53:42.889469] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:54.274 [2024-11-05 16:53:42.889590] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:54.274 16:53:42 -- common/autotest_common.sh@960 -- # wait 112366 00:14:55.209 ************************************ 00:14:55.209 END TEST raid_state_function_test_sb 00:14:55.209 ************************************ 00:14:55.209 16:53:43 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:55.209 00:14:55.209 real 0m10.788s 00:14:55.209 user 0m18.747s 00:14:55.209 sys 0m1.352s 00:14:55.209 16:53:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:55.209 16:53:43 -- common/autotest_common.sh@10 -- # set +x 00:14:55.209 16:53:43 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:14:55.209 16:53:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:55.209 16:53:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:55.209 16:53:43 -- common/autotest_common.sh@10 -- # set +x 00:14:55.209 ************************************ 00:14:55.209 START TEST raid_superblock_test 00:14:55.209 ************************************ 00:14:55.210 16:53:43 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 2 00:14:55.210 16:53:43 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:14:55.210 16:53:43 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:55.210 16:53:43 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:55.210 16:53:43 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:55.210 16:53:43 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:55.210 16:53:43 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:55.210 16:53:43 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:55.210 16:53:43 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:55.210 16:53:43 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:55.210 16:53:43 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:55.210 16:53:43 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:55.210 16:53:43 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:55.210 16:53:43 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:55.210 16:53:43 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:14:55.210 16:53:43 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:55.210 16:53:43 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:55.210 16:53:43 -- bdev/bdev_raid.sh@357 -- # raid_pid=112696 00:14:55.210 16:53:43 -- bdev/bdev_raid.sh@358 -- # waitforlisten 112696 
/var/tmp/spdk-raid.sock 00:14:55.210 16:53:43 -- common/autotest_common.sh@829 -- # '[' -z 112696 ']' 00:14:55.210 16:53:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:55.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:55.210 16:53:43 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:55.210 16:53:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.210 16:53:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:55.210 16:53:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.210 16:53:43 -- common/autotest_common.sh@10 -- # set +x 00:14:55.210 [2024-11-05 16:53:43.987124] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:55.210 [2024-11-05 16:53:43.987942] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112696 ] 00:14:55.469 [2024-11-05 16:53:44.160453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.469 [2024-11-05 16:53:44.341711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.727 [2024-11-05 16:53:44.516340] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.294 16:53:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:56.294 16:53:44 -- common/autotest_common.sh@862 -- # return 0 00:14:56.294 16:53:44 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:56.294 16:53:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:56.294 16:53:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:56.294 16:53:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:56.294 16:53:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:56.294 16:53:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:56.294 16:53:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:56.294 16:53:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:56.294 16:53:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:56.552 malloc1 00:14:56.553 16:53:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:56.553 [2024-11-05 16:53:45.394151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:56.553 [2024-11-05 16:53:45.394277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.553 [2024-11-05 16:53:45.394322] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:14:56.553 [2024-11-05 16:53:45.394379] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.553 [2024-11-05 16:53:45.396742] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.553 [2024-11-05 16:53:45.396807] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:56.553 pt1 00:14:56.553 16:53:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
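
Each base device for the superblock test is a malloc disk wrapped in a passthru bdev with a fixed UUID, which gives the raid superblock a stable identity to bind to. Condensed from the loop traced here, with the second iteration following below (rpc.py again abbreviates the full scripts/rpc.py path):

    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
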
00:14:56.553 16:53:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:56.553 16:53:45 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:56.553 16:53:45 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:56.553 16:53:45 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:56.553 16:53:45 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:56.553 16:53:45 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:56.553 16:53:45 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:56.553 16:53:45 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:56.811 malloc2 00:14:56.811 16:53:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:57.069 [2024-11-05 16:53:45.830346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:57.069 [2024-11-05 16:53:45.830494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.069 [2024-11-05 16:53:45.830541] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:57.069 [2024-11-05 16:53:45.830603] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.069 [2024-11-05 16:53:45.833055] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.069 [2024-11-05 16:53:45.833126] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:57.069 pt2 00:14:57.069 16:53:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:57.069 16:53:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:57.069 16:53:45 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:57.328 [2024-11-05 16:53:46.030482] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:57.328 [2024-11-05 16:53:46.032551] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:57.328 [2024-11-05 16:53:46.032767] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:14:57.328 [2024-11-05 16:53:46.032783] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:57.328 [2024-11-05 16:53:46.032927] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:14:57.328 [2024-11-05 16:53:46.033280] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:14:57.328 [2024-11-05 16:53:46.033295] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:14:57.328 [2024-11-05 16:53:46.033443] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.328 16:53:46 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:57.328 16:53:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:57.328 16:53:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:57.328 16:53:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:57.328 16:53:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:57.328 16:53:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:14:57.328 16:53:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:57.328 16:53:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:57.328 16:53:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:57.328 16:53:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:57.328 16:53:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.328 16:53:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.586 16:53:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:57.586 "name": "raid_bdev1", 00:14:57.586 "uuid": "411b23c4-b532-468b-a7d2-0e99172c8e80", 00:14:57.586 "strip_size_kb": 64, 00:14:57.586 "state": "online", 00:14:57.586 "raid_level": "raid0", 00:14:57.586 "superblock": true, 00:14:57.586 "num_base_bdevs": 2, 00:14:57.586 "num_base_bdevs_discovered": 2, 00:14:57.586 "num_base_bdevs_operational": 2, 00:14:57.586 "base_bdevs_list": [ 00:14:57.586 { 00:14:57.586 "name": "pt1", 00:14:57.586 "uuid": "c4f5980c-16ee-57fc-82c7-f8e6f7218b27", 00:14:57.586 "is_configured": true, 00:14:57.586 "data_offset": 2048, 00:14:57.586 "data_size": 63488 00:14:57.586 }, 00:14:57.586 { 00:14:57.586 "name": "pt2", 00:14:57.586 "uuid": "c5d4626b-032a-588b-85e1-2c930b29220e", 00:14:57.586 "is_configured": true, 00:14:57.586 "data_offset": 2048, 00:14:57.586 "data_size": 63488 00:14:57.586 } 00:14:57.586 ] 00:14:57.586 }' 00:14:57.586 16:53:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:57.586 16:53:46 -- common/autotest_common.sh@10 -- # set +x 00:14:58.152 16:53:46 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:58.152 16:53:46 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:58.410 [2024-11-05 16:53:47.094907] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.410 16:53:47 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=411b23c4-b532-468b-a7d2-0e99172c8e80 00:14:58.410 16:53:47 -- bdev/bdev_raid.sh@380 -- # '[' -z 411b23c4-b532-468b-a7d2-0e99172c8e80 ']' 00:14:58.410 16:53:47 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:58.669 [2024-11-05 16:53:47.310728] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:58.669 [2024-11-05 16:53:47.310764] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.669 [2024-11-05 16:53:47.310866] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.669 [2024-11-05 16:53:47.310934] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.669 [2024-11-05 16:53:47.310947] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:14:58.669 16:53:47 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.669 16:53:47 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:58.669 16:53:47 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:58.669 16:53:47 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:58.669 16:53:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:58.669 16:53:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
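Teardown mirrors construction in reverse: deleting raid_bdev1 first flips its state from online to offline and releases the claimed base bdevs, after which the passthru bdevs are removed one by one. A sketch of the same cleanup, with the names and the final jq check copied from the trace (the check should print false once no passthru bdev survives):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_raid_delete raid_bdev1
  for i in 1 2; do
      "$rpc" -s "$sock" bdev_passthru_delete "pt$i"
  done
  # verify that no passthru bdev survived the cleanup
  "$rpc" -s "$sock" bdev_get_bdevs \
      | jq -r '[.[] | select(.product_name == "passthru")] | any'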
00:14:58.927 16:53:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:58.927 16:53:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:59.186 16:53:48 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:59.186 16:53:48 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:59.444 16:53:48 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:59.445 16:53:48 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:59.445 16:53:48 -- common/autotest_common.sh@650 -- # local es=0 00:14:59.445 16:53:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:59.445 16:53:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.445 16:53:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.445 16:53:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.445 16:53:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.445 16:53:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.445 16:53:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.445 16:53:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.445 16:53:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:59.445 16:53:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:59.703 [2024-11-05 16:53:48.491378] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:59.703 [2024-11-05 16:53:48.493471] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:59.703 [2024-11-05 16:53:48.493566] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:59.703 [2024-11-05 16:53:48.493659] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:59.703 [2024-11-05 16:53:48.493698] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:59.703 [2024-11-05 16:53:48.493710] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:14:59.703 request: 00:14:59.703 { 00:14:59.703 "name": "raid_bdev1", 00:14:59.703 "raid_level": "raid0", 00:14:59.703 "base_bdevs": [ 00:14:59.703 "malloc1", 00:14:59.703 "malloc2" 00:14:59.703 ], 00:14:59.703 "superblock": false, 00:14:59.703 "strip_size_kb": 64, 00:14:59.703 "method": "bdev_raid_create", 00:14:59.703 "req_id": 1 00:14:59.703 } 00:14:59.703 Got JSON-RPC error response 00:14:59.703 response: 00:14:59.703 { 00:14:59.703 "code": -17, 00:14:59.703 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:59.703 } 00:14:59.703 16:53:48 -- common/autotest_common.sh@653 -- # es=1 00:14:59.703 16:53:48 -- common/autotest_common.sh@661 
-- # (( es > 128 )) 00:14:59.703 16:53:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:59.703 16:53:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:59.703 16:53:48 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:59.703 16:53:48 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.961 16:53:48 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:59.961 16:53:48 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:59.961 16:53:48 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:00.225 [2024-11-05 16:53:48.911381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:00.225 [2024-11-05 16:53:48.911580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.225 [2024-11-05 16:53:48.911623] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:00.225 [2024-11-05 16:53:48.911664] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.225 [2024-11-05 16:53:48.914412] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.225 [2024-11-05 16:53:48.914502] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:00.225 [2024-11-05 16:53:48.914628] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:00.225 [2024-11-05 16:53:48.914721] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:00.225 pt1 00:15:00.225 16:53:48 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:15:00.225 16:53:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:00.225 16:53:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:00.225 16:53:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:00.225 16:53:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:00.225 16:53:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:00.225 16:53:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:00.225 16:53:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:00.225 16:53:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:00.225 16:53:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:00.225 16:53:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.225 16:53:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.494 16:53:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:00.494 "name": "raid_bdev1", 00:15:00.494 "uuid": "411b23c4-b532-468b-a7d2-0e99172c8e80", 00:15:00.494 "strip_size_kb": 64, 00:15:00.494 "state": "configuring", 00:15:00.494 "raid_level": "raid0", 00:15:00.494 "superblock": true, 00:15:00.494 "num_base_bdevs": 2, 00:15:00.494 "num_base_bdevs_discovered": 1, 00:15:00.494 "num_base_bdevs_operational": 2, 00:15:00.494 "base_bdevs_list": [ 00:15:00.494 { 00:15:00.494 "name": "pt1", 00:15:00.494 "uuid": "c4f5980c-16ee-57fc-82c7-f8e6f7218b27", 00:15:00.494 "is_configured": true, 00:15:00.494 "data_offset": 2048, 00:15:00.494 "data_size": 63488 00:15:00.494 }, 00:15:00.494 { 00:15:00.494 "name": null, 00:15:00.494 "uuid": "c5d4626b-032a-588b-85e1-2c930b29220e", 00:15:00.494 
"is_configured": false, 00:15:00.494 "data_offset": 2048, 00:15:00.494 "data_size": 63488 00:15:00.494 } 00:15:00.494 ] 00:15:00.494 }' 00:15:00.494 16:53:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:00.494 16:53:49 -- common/autotest_common.sh@10 -- # set +x 00:15:01.075 16:53:49 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:01.075 16:53:49 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:01.075 16:53:49 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:01.075 16:53:49 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:01.075 [2024-11-05 16:53:49.947713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:01.075 [2024-11-05 16:53:49.947878] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.075 [2024-11-05 16:53:49.947920] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:01.075 [2024-11-05 16:53:49.947948] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.075 [2024-11-05 16:53:49.948624] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.075 [2024-11-05 16:53:49.948681] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:01.075 [2024-11-05 16:53:49.948787] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:01.075 [2024-11-05 16:53:49.948813] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:01.075 [2024-11-05 16:53:49.948932] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:15:01.075 [2024-11-05 16:53:49.948945] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:01.075 [2024-11-05 16:53:49.949065] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:01.075 [2024-11-05 16:53:49.949411] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:15:01.075 [2024-11-05 16:53:49.949437] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:15:01.075 [2024-11-05 16:53:49.949628] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.075 pt2 00:15:01.075 16:53:49 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:01.075 16:53:49 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:01.075 16:53:49 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:01.075 16:53:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:01.075 16:53:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:01.075 16:53:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:01.075 16:53:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:01.334 16:53:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:01.334 16:53:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:01.334 16:53:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:01.334 16:53:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:01.334 16:53:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:01.334 16:53:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.334 16:53:49 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.334 16:53:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:01.334 "name": "raid_bdev1", 00:15:01.334 "uuid": "411b23c4-b532-468b-a7d2-0e99172c8e80", 00:15:01.334 "strip_size_kb": 64, 00:15:01.334 "state": "online", 00:15:01.334 "raid_level": "raid0", 00:15:01.334 "superblock": true, 00:15:01.334 "num_base_bdevs": 2, 00:15:01.334 "num_base_bdevs_discovered": 2, 00:15:01.334 "num_base_bdevs_operational": 2, 00:15:01.334 "base_bdevs_list": [ 00:15:01.334 { 00:15:01.334 "name": "pt1", 00:15:01.334 "uuid": "c4f5980c-16ee-57fc-82c7-f8e6f7218b27", 00:15:01.334 "is_configured": true, 00:15:01.334 "data_offset": 2048, 00:15:01.334 "data_size": 63488 00:15:01.334 }, 00:15:01.334 { 00:15:01.334 "name": "pt2", 00:15:01.334 "uuid": "c5d4626b-032a-588b-85e1-2c930b29220e", 00:15:01.334 "is_configured": true, 00:15:01.334 "data_offset": 2048, 00:15:01.334 "data_size": 63488 00:15:01.334 } 00:15:01.334 ] 00:15:01.334 }' 00:15:01.334 16:53:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:01.334 16:53:50 -- common/autotest_common.sh@10 -- # set +x 00:15:02.271 16:53:50 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:02.271 16:53:50 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:02.271 [2024-11-05 16:53:51.040171] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.271 16:53:51 -- bdev/bdev_raid.sh@430 -- # '[' 411b23c4-b532-468b-a7d2-0e99172c8e80 '!=' 411b23c4-b532-468b-a7d2-0e99172c8e80 ']' 00:15:02.271 16:53:51 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:15:02.271 16:53:51 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:02.271 16:53:51 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:02.271 16:53:51 -- bdev/bdev_raid.sh@511 -- # killprocess 112696 00:15:02.271 16:53:51 -- common/autotest_common.sh@936 -- # '[' -z 112696 ']' 00:15:02.271 16:53:51 -- common/autotest_common.sh@940 -- # kill -0 112696 00:15:02.271 16:53:51 -- common/autotest_common.sh@941 -- # uname 00:15:02.271 16:53:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:02.271 16:53:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112696 00:15:02.271 killing process with pid 112696 00:15:02.271 16:53:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:02.271 16:53:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:02.271 16:53:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 112696' 00:15:02.271 16:53:51 -- common/autotest_common.sh@955 -- # kill 112696 00:15:02.271 [2024-11-05 16:53:51.081764] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:02.271 16:53:51 -- common/autotest_common.sh@960 -- # wait 112696 00:15:02.271 [2024-11-05 16:53:51.081828] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.271 [2024-11-05 16:53:51.081877] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.271 [2024-11-05 16:53:51.081887] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:15:02.530 [2024-11-05 16:53:51.221919] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:03.469 ************************************ 00:15:03.469 END TEST raid_superblock_test 00:15:03.469 ************************************ 00:15:03.469 16:53:52 -- 
bdev/bdev_raid.sh@513 -- # return 0 00:15:03.469 00:15:03.469 real 0m8.269s 00:15:03.469 user 0m14.103s 00:15:03.470 sys 0m1.030s 00:15:03.470 16:53:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:03.470 16:53:52 -- common/autotest_common.sh@10 -- # set +x 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:15:03.470 16:53:52 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:03.470 16:53:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:03.470 16:53:52 -- common/autotest_common.sh@10 -- # set +x 00:15:03.470 ************************************ 00:15:03.470 START TEST raid_state_function_test 00:15:03.470 ************************************ 00:15:03.470 16:53:52 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 2 false 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@226 -- # raid_pid=112947 00:15:03.470 Process raid pid: 112947 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 112947' 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@228 -- # waitforlisten 112947 /var/tmp/spdk-raid.sock 00:15:03.470 16:53:52 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:03.470 16:53:52 -- common/autotest_common.sh@829 -- # '[' -z 112947 ']' 00:15:03.470 16:53:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:03.470 16:53:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:03.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
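raid_state_function_test repeats the exercise against a fresh bdev_svc process; the harness launches the app with bdev_raid debug logging enabled and blocks in waitforlisten until the RPC socket accepts connections. A rough equivalent of that startup, assuming the repo layout from this run (waitforlisten is the autotest_common.sh helper; the polling loop below is only a simplified stand-in for it):

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  svc_pid=$!
  # poll until the UNIX domain socket answers RPCs, as waitforlisten does
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until "$rpc" -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done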
00:15:03.470 16:53:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:03.470 16:53:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:03.470 16:53:52 -- common/autotest_common.sh@10 -- # set +x 00:15:03.470 [2024-11-05 16:53:52.317123] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:03.470 [2024-11-05 16:53:52.317924] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.728 [2024-11-05 16:53:52.491984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.987 [2024-11-05 16:53:52.720138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.246 [2024-11-05 16:53:52.904005] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.506 16:53:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:04.506 16:53:53 -- common/autotest_common.sh@862 -- # return 0 00:15:04.506 16:53:53 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:04.766 [2024-11-05 16:53:53.457089] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:04.766 [2024-11-05 16:53:53.457342] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:04.766 [2024-11-05 16:53:53.457475] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:04.766 [2024-11-05 16:53:53.457596] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:04.766 16:53:53 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:04.766 16:53:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:04.766 16:53:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:04.766 16:53:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:04.766 16:53:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:04.766 16:53:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:04.766 16:53:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:04.766 16:53:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:04.766 16:53:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:04.766 16:53:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:04.766 16:53:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.766 16:53:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.026 16:53:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:05.026 "name": "Existed_Raid", 00:15:05.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.026 "strip_size_kb": 64, 00:15:05.026 "state": "configuring", 00:15:05.026 "raid_level": "concat", 00:15:05.026 "superblock": false, 00:15:05.026 "num_base_bdevs": 2, 00:15:05.026 "num_base_bdevs_discovered": 0, 00:15:05.026 "num_base_bdevs_operational": 2, 00:15:05.026 "base_bdevs_list": [ 00:15:05.026 { 00:15:05.026 "name": "BaseBdev1", 00:15:05.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.026 "is_configured": false, 
00:15:05.026 "data_offset": 0, 00:15:05.026 "data_size": 0 00:15:05.026 }, 00:15:05.026 { 00:15:05.026 "name": "BaseBdev2", 00:15:05.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.026 "is_configured": false, 00:15:05.026 "data_offset": 0, 00:15:05.026 "data_size": 0 00:15:05.026 } 00:15:05.026 ] 00:15:05.026 }' 00:15:05.026 16:53:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:05.026 16:53:53 -- common/autotest_common.sh@10 -- # set +x 00:15:05.594 16:53:54 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:05.853 [2024-11-05 16:53:54.537218] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:05.853 [2024-11-05 16:53:54.537453] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:15:05.853 16:53:54 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:05.853 [2024-11-05 16:53:54.733263] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:05.853 [2024-11-05 16:53:54.733509] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:05.853 [2024-11-05 16:53:54.733623] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:05.854 [2024-11-05 16:53:54.733689] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:06.113 16:53:54 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:06.113 [2024-11-05 16:53:54.965246] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:06.113 BaseBdev1 00:15:06.113 16:53:54 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:06.113 16:53:54 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:06.113 16:53:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:06.113 16:53:54 -- common/autotest_common.sh@899 -- # local i 00:15:06.113 16:53:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:06.113 16:53:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:06.113 16:53:54 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:06.372 16:53:55 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:06.631 [ 00:15:06.631 { 00:15:06.631 "name": "BaseBdev1", 00:15:06.631 "aliases": [ 00:15:06.631 "36b0725d-efef-4347-93f0-0c24de9f6ab0" 00:15:06.631 ], 00:15:06.631 "product_name": "Malloc disk", 00:15:06.631 "block_size": 512, 00:15:06.631 "num_blocks": 65536, 00:15:06.631 "uuid": "36b0725d-efef-4347-93f0-0c24de9f6ab0", 00:15:06.631 "assigned_rate_limits": { 00:15:06.631 "rw_ios_per_sec": 0, 00:15:06.631 "rw_mbytes_per_sec": 0, 00:15:06.631 "r_mbytes_per_sec": 0, 00:15:06.631 "w_mbytes_per_sec": 0 00:15:06.631 }, 00:15:06.631 "claimed": true, 00:15:06.631 "claim_type": "exclusive_write", 00:15:06.631 "zoned": false, 00:15:06.631 "supported_io_types": { 00:15:06.631 "read": true, 00:15:06.631 "write": true, 00:15:06.631 "unmap": true, 00:15:06.631 "write_zeroes": true, 00:15:06.631 "flush": true, 00:15:06.631 "reset": true, 00:15:06.631 
"compare": false, 00:15:06.631 "compare_and_write": false, 00:15:06.631 "abort": true, 00:15:06.631 "nvme_admin": false, 00:15:06.631 "nvme_io": false 00:15:06.631 }, 00:15:06.631 "memory_domains": [ 00:15:06.631 { 00:15:06.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.631 "dma_device_type": 2 00:15:06.631 } 00:15:06.631 ], 00:15:06.631 "driver_specific": {} 00:15:06.631 } 00:15:06.631 ] 00:15:06.631 16:53:55 -- common/autotest_common.sh@905 -- # return 0 00:15:06.631 16:53:55 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:06.631 16:53:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:06.631 16:53:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:06.631 16:53:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:06.631 16:53:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:06.631 16:53:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:06.631 16:53:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:06.631 16:53:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:06.631 16:53:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:06.631 16:53:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:06.631 16:53:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.631 16:53:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.890 16:53:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:06.890 "name": "Existed_Raid", 00:15:06.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.890 "strip_size_kb": 64, 00:15:06.890 "state": "configuring", 00:15:06.891 "raid_level": "concat", 00:15:06.891 "superblock": false, 00:15:06.891 "num_base_bdevs": 2, 00:15:06.891 "num_base_bdevs_discovered": 1, 00:15:06.891 "num_base_bdevs_operational": 2, 00:15:06.891 "base_bdevs_list": [ 00:15:06.891 { 00:15:06.891 "name": "BaseBdev1", 00:15:06.891 "uuid": "36b0725d-efef-4347-93f0-0c24de9f6ab0", 00:15:06.891 "is_configured": true, 00:15:06.891 "data_offset": 0, 00:15:06.891 "data_size": 65536 00:15:06.891 }, 00:15:06.891 { 00:15:06.891 "name": "BaseBdev2", 00:15:06.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.891 "is_configured": false, 00:15:06.891 "data_offset": 0, 00:15:06.891 "data_size": 0 00:15:06.891 } 00:15:06.891 ] 00:15:06.891 }' 00:15:06.891 16:53:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:06.891 16:53:55 -- common/autotest_common.sh@10 -- # set +x 00:15:07.459 16:53:56 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:07.718 [2024-11-05 16:53:56.529645] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:07.718 [2024-11-05 16:53:56.529852] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:07.718 16:53:56 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:07.718 16:53:56 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:07.977 [2024-11-05 16:53:56.769719] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:07.977 [2024-11-05 16:53:56.771636] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:15:07.977 [2024-11-05 16:53:56.771849] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:07.977 16:53:56 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:07.977 16:53:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:07.977 16:53:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:07.977 16:53:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:07.977 16:53:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:07.977 16:53:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:07.977 16:53:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:07.977 16:53:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:07.977 16:53:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:07.977 16:53:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:07.977 16:53:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:07.977 16:53:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:07.977 16:53:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.977 16:53:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.237 16:53:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:08.237 "name": "Existed_Raid", 00:15:08.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.237 "strip_size_kb": 64, 00:15:08.237 "state": "configuring", 00:15:08.237 "raid_level": "concat", 00:15:08.237 "superblock": false, 00:15:08.237 "num_base_bdevs": 2, 00:15:08.237 "num_base_bdevs_discovered": 1, 00:15:08.237 "num_base_bdevs_operational": 2, 00:15:08.237 "base_bdevs_list": [ 00:15:08.237 { 00:15:08.237 "name": "BaseBdev1", 00:15:08.237 "uuid": "36b0725d-efef-4347-93f0-0c24de9f6ab0", 00:15:08.237 "is_configured": true, 00:15:08.237 "data_offset": 0, 00:15:08.237 "data_size": 65536 00:15:08.237 }, 00:15:08.237 { 00:15:08.237 "name": "BaseBdev2", 00:15:08.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.237 "is_configured": false, 00:15:08.237 "data_offset": 0, 00:15:08.237 "data_size": 0 00:15:08.237 } 00:15:08.237 ] 00:15:08.237 }' 00:15:08.237 16:53:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:08.237 16:53:56 -- common/autotest_common.sh@10 -- # set +x 00:15:08.806 16:53:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:09.066 [2024-11-05 16:53:57.819843] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:09.066 [2024-11-05 16:53:57.820185] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:15:09.066 [2024-11-05 16:53:57.820233] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:09.066 [2024-11-05 16:53:57.820444] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:09.066 [2024-11-05 16:53:57.820931] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:15:09.066 [2024-11-05 16:53:57.821095] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:15:09.066 [2024-11-05 16:53:57.821499] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.066 BaseBdev2 00:15:09.066 16:53:57 -- bdev/bdev_raid.sh@257 
-- # waitforbdev BaseBdev2 00:15:09.066 16:53:57 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:09.066 16:53:57 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:09.066 16:53:57 -- common/autotest_common.sh@899 -- # local i 00:15:09.066 16:53:57 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:09.066 16:53:57 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:09.066 16:53:57 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:09.326 16:53:58 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:09.585 [ 00:15:09.585 { 00:15:09.585 "name": "BaseBdev2", 00:15:09.585 "aliases": [ 00:15:09.585 "a695230c-7d92-4b45-8f13-7b47a83c5b31" 00:15:09.585 ], 00:15:09.585 "product_name": "Malloc disk", 00:15:09.585 "block_size": 512, 00:15:09.586 "num_blocks": 65536, 00:15:09.586 "uuid": "a695230c-7d92-4b45-8f13-7b47a83c5b31", 00:15:09.586 "assigned_rate_limits": { 00:15:09.586 "rw_ios_per_sec": 0, 00:15:09.586 "rw_mbytes_per_sec": 0, 00:15:09.586 "r_mbytes_per_sec": 0, 00:15:09.586 "w_mbytes_per_sec": 0 00:15:09.586 }, 00:15:09.586 "claimed": true, 00:15:09.586 "claim_type": "exclusive_write", 00:15:09.586 "zoned": false, 00:15:09.586 "supported_io_types": { 00:15:09.586 "read": true, 00:15:09.586 "write": true, 00:15:09.586 "unmap": true, 00:15:09.586 "write_zeroes": true, 00:15:09.586 "flush": true, 00:15:09.586 "reset": true, 00:15:09.586 "compare": false, 00:15:09.586 "compare_and_write": false, 00:15:09.586 "abort": true, 00:15:09.586 "nvme_admin": false, 00:15:09.586 "nvme_io": false 00:15:09.586 }, 00:15:09.586 "memory_domains": [ 00:15:09.586 { 00:15:09.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.586 "dma_device_type": 2 00:15:09.586 } 00:15:09.586 ], 00:15:09.586 "driver_specific": {} 00:15:09.586 } 00:15:09.586 ] 00:15:09.586 16:53:58 -- common/autotest_common.sh@905 -- # return 0 00:15:09.586 16:53:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:09.586 16:53:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:09.586 16:53:58 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:09.586 16:53:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:09.586 16:53:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:09.586 16:53:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:09.586 16:53:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:09.586 16:53:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:09.586 16:53:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:09.586 16:53:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:09.586 16:53:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:09.586 16:53:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:09.586 16:53:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.586 16:53:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.586 16:53:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:09.586 "name": "Existed_Raid", 00:15:09.586 "uuid": "d9870b58-1681-4d7a-9b8a-34ae014a55e7", 00:15:09.586 "strip_size_kb": 64, 00:15:09.586 "state": "online", 00:15:09.586 "raid_level": "concat", 00:15:09.586 "superblock": false, 
00:15:09.586 "num_base_bdevs": 2, 00:15:09.586 "num_base_bdevs_discovered": 2, 00:15:09.586 "num_base_bdevs_operational": 2, 00:15:09.586 "base_bdevs_list": [ 00:15:09.586 { 00:15:09.586 "name": "BaseBdev1", 00:15:09.586 "uuid": "36b0725d-efef-4347-93f0-0c24de9f6ab0", 00:15:09.586 "is_configured": true, 00:15:09.586 "data_offset": 0, 00:15:09.586 "data_size": 65536 00:15:09.586 }, 00:15:09.586 { 00:15:09.586 "name": "BaseBdev2", 00:15:09.586 "uuid": "a695230c-7d92-4b45-8f13-7b47a83c5b31", 00:15:09.586 "is_configured": true, 00:15:09.586 "data_offset": 0, 00:15:09.586 "data_size": 65536 00:15:09.586 } 00:15:09.586 ] 00:15:09.586 }' 00:15:09.586 16:53:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:09.586 16:53:58 -- common/autotest_common.sh@10 -- # set +x 00:15:10.524 16:53:59 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:10.524 [2024-11-05 16:53:59.343955] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:10.524 [2024-11-05 16:53:59.344224] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:10.524 [2024-11-05 16:53:59.344404] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:10.783 16:53:59 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:10.783 16:53:59 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:10.783 16:53:59 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:10.783 16:53:59 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:10.783 16:53:59 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:10.783 16:53:59 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:10.783 16:53:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:10.783 16:53:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:10.783 16:53:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:10.783 16:53:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:10.783 16:53:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:10.783 16:53:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:10.783 16:53:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:10.783 16:53:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:10.783 16:53:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:10.783 16:53:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.783 16:53:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.041 16:53:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:11.041 "name": "Existed_Raid", 00:15:11.041 "uuid": "d9870b58-1681-4d7a-9b8a-34ae014a55e7", 00:15:11.041 "strip_size_kb": 64, 00:15:11.041 "state": "offline", 00:15:11.041 "raid_level": "concat", 00:15:11.041 "superblock": false, 00:15:11.041 "num_base_bdevs": 2, 00:15:11.041 "num_base_bdevs_discovered": 1, 00:15:11.041 "num_base_bdevs_operational": 1, 00:15:11.041 "base_bdevs_list": [ 00:15:11.041 { 00:15:11.041 "name": null, 00:15:11.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.041 "is_configured": false, 00:15:11.041 "data_offset": 0, 00:15:11.041 "data_size": 65536 00:15:11.041 }, 00:15:11.041 { 00:15:11.041 "name": "BaseBdev2", 00:15:11.041 "uuid": "a695230c-7d92-4b45-8f13-7b47a83c5b31", 00:15:11.041 "is_configured": true, 00:15:11.041 "data_offset": 0, 00:15:11.041 
"data_size": 65536 00:15:11.041 } 00:15:11.041 ] 00:15:11.041 }' 00:15:11.041 16:53:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:11.041 16:53:59 -- common/autotest_common.sh@10 -- # set +x 00:15:11.610 16:54:00 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:11.610 16:54:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:11.610 16:54:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.610 16:54:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:11.869 16:54:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:11.869 16:54:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:11.869 16:54:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:12.171 [2024-11-05 16:54:00.831504] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:12.171 [2024-11-05 16:54:00.831839] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:15:12.171 16:54:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:12.171 16:54:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:12.171 16:54:00 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.171 16:54:00 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:12.436 16:54:01 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:12.436 16:54:01 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:12.436 16:54:01 -- bdev/bdev_raid.sh@287 -- # killprocess 112947 00:15:12.436 16:54:01 -- common/autotest_common.sh@936 -- # '[' -z 112947 ']' 00:15:12.436 16:54:01 -- common/autotest_common.sh@940 -- # kill -0 112947 00:15:12.436 16:54:01 -- common/autotest_common.sh@941 -- # uname 00:15:12.436 16:54:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:12.437 16:54:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112947 00:15:12.437 16:54:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:12.437 16:54:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:12.437 16:54:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 112947' 00:15:12.437 killing process with pid 112947 00:15:12.437 16:54:01 -- common/autotest_common.sh@955 -- # kill 112947 00:15:12.437 16:54:01 -- common/autotest_common.sh@960 -- # wait 112947 00:15:12.437 [2024-11-05 16:54:01.152231] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:12.437 [2024-11-05 16:54:01.152348] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:13.372 ************************************ 00:15:13.372 END TEST raid_state_function_test 00:15:13.372 ************************************ 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:13.372 00:15:13.372 real 0m9.941s 00:15:13.372 user 0m17.169s 00:15:13.372 sys 0m1.235s 00:15:13.372 16:54:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:13.372 16:54:02 -- common/autotest_common.sh@10 -- # set +x 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:15:13.372 16:54:02 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:13.372 16:54:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:13.372 16:54:02 -- common/autotest_common.sh@10 -- # 
set +x 00:15:13.372 ************************************ 00:15:13.372 START TEST raid_state_function_test_sb 00:15:13.372 ************************************ 00:15:13.372 16:54:02 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 2 true 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:13.372 16:54:02 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:13.373 16:54:02 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:13.373 16:54:02 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:13.373 16:54:02 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:13.373 16:54:02 -- bdev/bdev_raid.sh@226 -- # raid_pid=113261 00:15:13.373 16:54:02 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:13.373 16:54:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 113261' 00:15:13.373 Process raid pid: 113261 00:15:13.373 16:54:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 113261 /var/tmp/spdk-raid.sock 00:15:13.373 16:54:02 -- common/autotest_common.sh@829 -- # '[' -z 113261 ']' 00:15:13.373 16:54:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:13.373 16:54:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.373 16:54:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:13.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:13.373 16:54:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.373 16:54:02 -- common/autotest_common.sh@10 -- # set +x 00:15:13.632 [2024-11-05 16:54:02.328044] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
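The _sb variant that begins here reruns the state checks with superblock_create_arg=-s appended to every bdev_raid_create call. One behaviour worth noting in the trace that follows: the create call may name base bdevs that do not exist yet, in which case the RPC succeeds and Existed_Raid is parked in the "configuring" state; registering BaseBdev1 and BaseBdev2 later lets the raid module claim them and bring the array online on its own. A hedged sketch of that deferred assembly, using the names and flags from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # neither base bdev exists yet; the raid is accepted but stays "configuring"
  "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r concat \
      -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # creating the base bdevs afterwards lets the raid module claim them
  "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1
  "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev2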
00:15:13.632 [2024-11-05 16:54:02.329081] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.632 [2024-11-05 16:54:02.504173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.890 [2024-11-05 16:54:02.734307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.149 [2024-11-05 16:54:02.923923] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.408 16:54:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.408 16:54:03 -- common/autotest_common.sh@862 -- # return 0 00:15:14.408 16:54:03 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:14.667 [2024-11-05 16:54:03.532702] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:14.667 [2024-11-05 16:54:03.532971] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:14.667 [2024-11-05 16:54:03.533086] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:14.667 [2024-11-05 16:54:03.533201] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:14.667 16:54:03 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:14.667 16:54:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:14.667 16:54:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:14.667 16:54:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:14.667 16:54:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:14.667 16:54:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:14.667 16:54:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:14.667 16:54:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:14.667 16:54:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:14.667 16:54:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:14.667 16:54:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.667 16:54:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.926 16:54:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:14.926 "name": "Existed_Raid", 00:15:14.926 "uuid": "a11a6b62-7e59-48e7-87c3-eeb393f67c11", 00:15:14.926 "strip_size_kb": 64, 00:15:14.926 "state": "configuring", 00:15:14.926 "raid_level": "concat", 00:15:14.926 "superblock": true, 00:15:14.926 "num_base_bdevs": 2, 00:15:14.926 "num_base_bdevs_discovered": 0, 00:15:14.926 "num_base_bdevs_operational": 2, 00:15:14.926 "base_bdevs_list": [ 00:15:14.926 { 00:15:14.926 "name": "BaseBdev1", 00:15:14.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.926 "is_configured": false, 00:15:14.926 "data_offset": 0, 00:15:14.926 "data_size": 0 00:15:14.926 }, 00:15:14.926 { 00:15:14.926 "name": "BaseBdev2", 00:15:14.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.926 "is_configured": false, 00:15:14.926 "data_offset": 0, 00:15:14.926 "data_size": 0 00:15:14.926 } 00:15:14.926 ] 00:15:14.926 }' 00:15:14.926 16:54:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:14.926 16:54:03 -- 
common/autotest_common.sh@10 -- # set +x 00:15:15.864 16:54:04 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:15.864 [2024-11-05 16:54:04.584831] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:15.864 [2024-11-05 16:54:04.585045] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:15:15.864 16:54:04 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:16.123 [2024-11-05 16:54:04.840906] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:16.123 [2024-11-05 16:54:04.841135] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:16.123 [2024-11-05 16:54:04.841292] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:16.123 [2024-11-05 16:54:04.841440] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:16.123 16:54:04 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:16.382 [2024-11-05 16:54:05.115393] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:16.382 BaseBdev1 00:15:16.382 16:54:05 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:16.382 16:54:05 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:16.382 16:54:05 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:16.382 16:54:05 -- common/autotest_common.sh@899 -- # local i 00:15:16.382 16:54:05 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:16.382 16:54:05 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:16.382 16:54:05 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:16.641 16:54:05 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:16.900 [ 00:15:16.900 { 00:15:16.900 "name": "BaseBdev1", 00:15:16.900 "aliases": [ 00:15:16.900 "493b81d7-0399-4902-93a5-684714bcc606" 00:15:16.900 ], 00:15:16.900 "product_name": "Malloc disk", 00:15:16.900 "block_size": 512, 00:15:16.900 "num_blocks": 65536, 00:15:16.900 "uuid": "493b81d7-0399-4902-93a5-684714bcc606", 00:15:16.900 "assigned_rate_limits": { 00:15:16.900 "rw_ios_per_sec": 0, 00:15:16.900 "rw_mbytes_per_sec": 0, 00:15:16.900 "r_mbytes_per_sec": 0, 00:15:16.900 "w_mbytes_per_sec": 0 00:15:16.900 }, 00:15:16.900 "claimed": true, 00:15:16.900 "claim_type": "exclusive_write", 00:15:16.900 "zoned": false, 00:15:16.900 "supported_io_types": { 00:15:16.900 "read": true, 00:15:16.900 "write": true, 00:15:16.900 "unmap": true, 00:15:16.900 "write_zeroes": true, 00:15:16.900 "flush": true, 00:15:16.900 "reset": true, 00:15:16.900 "compare": false, 00:15:16.900 "compare_and_write": false, 00:15:16.900 "abort": true, 00:15:16.900 "nvme_admin": false, 00:15:16.900 "nvme_io": false 00:15:16.900 }, 00:15:16.900 "memory_domains": [ 00:15:16.900 { 00:15:16.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.900 "dma_device_type": 2 00:15:16.900 } 00:15:16.900 ], 00:15:16.900 "driver_specific": {} 00:15:16.900 } 00:15:16.900 ] 00:15:16.900 
16:54:05 -- common/autotest_common.sh@905 -- # return 0 00:15:16.900 16:54:05 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:16.900 16:54:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:16.900 16:54:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:16.900 16:54:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:16.900 16:54:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:16.900 16:54:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:16.901 16:54:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:16.901 16:54:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:16.901 16:54:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:16.901 16:54:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:16.901 16:54:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.901 16:54:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.159 16:54:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:17.159 "name": "Existed_Raid", 00:15:17.159 "uuid": "bcbed692-2146-4347-88f9-98313ddc90e5", 00:15:17.159 "strip_size_kb": 64, 00:15:17.159 "state": "configuring", 00:15:17.159 "raid_level": "concat", 00:15:17.159 "superblock": true, 00:15:17.159 "num_base_bdevs": 2, 00:15:17.159 "num_base_bdevs_discovered": 1, 00:15:17.159 "num_base_bdevs_operational": 2, 00:15:17.159 "base_bdevs_list": [ 00:15:17.159 { 00:15:17.159 "name": "BaseBdev1", 00:15:17.159 "uuid": "493b81d7-0399-4902-93a5-684714bcc606", 00:15:17.159 "is_configured": true, 00:15:17.159 "data_offset": 2048, 00:15:17.159 "data_size": 63488 00:15:17.159 }, 00:15:17.159 { 00:15:17.159 "name": "BaseBdev2", 00:15:17.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.159 "is_configured": false, 00:15:17.159 "data_offset": 0, 00:15:17.159 "data_size": 0 00:15:17.159 } 00:15:17.159 ] 00:15:17.159 }' 00:15:17.159 16:54:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:17.159 16:54:05 -- common/autotest_common.sh@10 -- # set +x 00:15:17.744 16:54:06 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:17.745 [2024-11-05 16:54:06.575784] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:17.745 [2024-11-05 16:54:06.575977] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:17.745 16:54:06 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:17.745 16:54:06 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:18.313 16:54:06 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:18.313 BaseBdev1 00:15:18.572 16:54:07 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:18.572 16:54:07 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:18.572 16:54:07 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:18.572 16:54:07 -- common/autotest_common.sh@899 -- # local i 00:15:18.572 16:54:07 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:18.572 16:54:07 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:18.572 16:54:07 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:18.572 16:54:07 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:18.831 [ 00:15:18.831 { 00:15:18.831 "name": "BaseBdev1", 00:15:18.831 "aliases": [ 00:15:18.831 "5f74af7f-9563-4858-ad76-ddeac11f23df" 00:15:18.831 ], 00:15:18.831 "product_name": "Malloc disk", 00:15:18.831 "block_size": 512, 00:15:18.831 "num_blocks": 65536, 00:15:18.831 "uuid": "5f74af7f-9563-4858-ad76-ddeac11f23df", 00:15:18.831 "assigned_rate_limits": { 00:15:18.831 "rw_ios_per_sec": 0, 00:15:18.831 "rw_mbytes_per_sec": 0, 00:15:18.831 "r_mbytes_per_sec": 0, 00:15:18.831 "w_mbytes_per_sec": 0 00:15:18.831 }, 00:15:18.831 "claimed": false, 00:15:18.831 "zoned": false, 00:15:18.831 "supported_io_types": { 00:15:18.831 "read": true, 00:15:18.831 "write": true, 00:15:18.831 "unmap": true, 00:15:18.831 "write_zeroes": true, 00:15:18.831 "flush": true, 00:15:18.831 "reset": true, 00:15:18.831 "compare": false, 00:15:18.831 "compare_and_write": false, 00:15:18.831 "abort": true, 00:15:18.831 "nvme_admin": false, 00:15:18.831 "nvme_io": false 00:15:18.831 }, 00:15:18.831 "memory_domains": [ 00:15:18.831 { 00:15:18.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.831 "dma_device_type": 2 00:15:18.831 } 00:15:18.831 ], 00:15:18.831 "driver_specific": {} 00:15:18.831 } 00:15:18.831 ] 00:15:18.831 16:54:07 -- common/autotest_common.sh@905 -- # return 0 00:15:18.832 16:54:07 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:19.090 [2024-11-05 16:54:07.919338] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.090 [2024-11-05 16:54:07.921459] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.090 [2024-11-05 16:54:07.921663] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.090 16:54:07 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:19.090 16:54:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:19.090 16:54:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:19.090 16:54:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:19.090 16:54:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:19.090 16:54:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:19.090 16:54:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:19.090 16:54:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:19.090 16:54:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:19.090 16:54:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:19.090 16:54:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:19.090 16:54:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:19.090 16:54:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.090 16:54:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.348 16:54:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:19.348 "name": "Existed_Raid", 00:15:19.348 "uuid": "4e9fe199-49e9-419c-8a52-594dc8200569", 00:15:19.348 "strip_size_kb": 64, 00:15:19.348 "state": 
"configuring", 00:15:19.348 "raid_level": "concat", 00:15:19.348 "superblock": true, 00:15:19.348 "num_base_bdevs": 2, 00:15:19.348 "num_base_bdevs_discovered": 1, 00:15:19.348 "num_base_bdevs_operational": 2, 00:15:19.348 "base_bdevs_list": [ 00:15:19.348 { 00:15:19.348 "name": "BaseBdev1", 00:15:19.348 "uuid": "5f74af7f-9563-4858-ad76-ddeac11f23df", 00:15:19.348 "is_configured": true, 00:15:19.348 "data_offset": 2048, 00:15:19.348 "data_size": 63488 00:15:19.348 }, 00:15:19.348 { 00:15:19.348 "name": "BaseBdev2", 00:15:19.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.348 "is_configured": false, 00:15:19.348 "data_offset": 0, 00:15:19.348 "data_size": 0 00:15:19.348 } 00:15:19.348 ] 00:15:19.348 }' 00:15:19.348 16:54:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:19.348 16:54:08 -- common/autotest_common.sh@10 -- # set +x 00:15:19.914 16:54:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:20.173 [2024-11-05 16:54:09.030160] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.173 [2024-11-05 16:54:09.030720] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:15:20.173 [2024-11-05 16:54:09.030935] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:20.173 [2024-11-05 16:54:09.031220] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:20.173 BaseBdev2 00:15:20.173 [2024-11-05 16:54:09.031861] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:15:20.173 [2024-11-05 16:54:09.031984] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:15:20.173 [2024-11-05 16:54:09.032251] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.173 16:54:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:20.173 16:54:09 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:20.173 16:54:09 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:20.173 16:54:09 -- common/autotest_common.sh@899 -- # local i 00:15:20.173 16:54:09 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:20.173 16:54:09 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:20.173 16:54:09 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:20.431 16:54:09 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:20.690 [ 00:15:20.690 { 00:15:20.690 "name": "BaseBdev2", 00:15:20.690 "aliases": [ 00:15:20.690 "9061da14-d2e9-40af-a318-0cc49ee3ddf3" 00:15:20.690 ], 00:15:20.690 "product_name": "Malloc disk", 00:15:20.690 "block_size": 512, 00:15:20.690 "num_blocks": 65536, 00:15:20.690 "uuid": "9061da14-d2e9-40af-a318-0cc49ee3ddf3", 00:15:20.690 "assigned_rate_limits": { 00:15:20.690 "rw_ios_per_sec": 0, 00:15:20.690 "rw_mbytes_per_sec": 0, 00:15:20.690 "r_mbytes_per_sec": 0, 00:15:20.690 "w_mbytes_per_sec": 0 00:15:20.690 }, 00:15:20.690 "claimed": true, 00:15:20.690 "claim_type": "exclusive_write", 00:15:20.690 "zoned": false, 00:15:20.690 "supported_io_types": { 00:15:20.690 "read": true, 00:15:20.690 "write": true, 00:15:20.690 "unmap": true, 00:15:20.690 "write_zeroes": true, 00:15:20.690 "flush": true, 00:15:20.690 
"reset": true, 00:15:20.690 "compare": false, 00:15:20.690 "compare_and_write": false, 00:15:20.690 "abort": true, 00:15:20.690 "nvme_admin": false, 00:15:20.690 "nvme_io": false 00:15:20.690 }, 00:15:20.690 "memory_domains": [ 00:15:20.690 { 00:15:20.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.690 "dma_device_type": 2 00:15:20.690 } 00:15:20.690 ], 00:15:20.690 "driver_specific": {} 00:15:20.690 } 00:15:20.690 ] 00:15:20.690 16:54:09 -- common/autotest_common.sh@905 -- # return 0 00:15:20.690 16:54:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:20.690 16:54:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:20.690 16:54:09 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:20.690 16:54:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:20.690 16:54:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:20.690 16:54:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:20.690 16:54:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:20.690 16:54:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:20.690 16:54:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:20.690 16:54:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:20.690 16:54:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:20.690 16:54:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:20.690 16:54:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.690 16:54:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.948 16:54:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:20.948 "name": "Existed_Raid", 00:15:20.948 "uuid": "4e9fe199-49e9-419c-8a52-594dc8200569", 00:15:20.948 "strip_size_kb": 64, 00:15:20.948 "state": "online", 00:15:20.948 "raid_level": "concat", 00:15:20.948 "superblock": true, 00:15:20.948 "num_base_bdevs": 2, 00:15:20.948 "num_base_bdevs_discovered": 2, 00:15:20.948 "num_base_bdevs_operational": 2, 00:15:20.948 "base_bdevs_list": [ 00:15:20.948 { 00:15:20.948 "name": "BaseBdev1", 00:15:20.948 "uuid": "5f74af7f-9563-4858-ad76-ddeac11f23df", 00:15:20.948 "is_configured": true, 00:15:20.948 "data_offset": 2048, 00:15:20.948 "data_size": 63488 00:15:20.948 }, 00:15:20.948 { 00:15:20.948 "name": "BaseBdev2", 00:15:20.948 "uuid": "9061da14-d2e9-40af-a318-0cc49ee3ddf3", 00:15:20.948 "is_configured": true, 00:15:20.948 "data_offset": 2048, 00:15:20.948 "data_size": 63488 00:15:20.948 } 00:15:20.948 ] 00:15:20.948 }' 00:15:20.948 16:54:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:20.948 16:54:09 -- common/autotest_common.sh@10 -- # set +x 00:15:21.514 16:54:10 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:21.773 [2024-11-05 16:54:10.514709] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:21.773 [2024-11-05 16:54:10.514935] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:21.773 [2024-11-05 16:54:10.515168] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.773 16:54:10 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:21.773 16:54:10 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:21.773 16:54:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:21.773 16:54:10 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:21.773 
16:54:10 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:21.773 16:54:10 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:21.773 16:54:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:21.773 16:54:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:21.773 16:54:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:21.773 16:54:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:21.773 16:54:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:21.773 16:54:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:21.773 16:54:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:21.773 16:54:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:21.773 16:54:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:21.773 16:54:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.773 16:54:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.031 16:54:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:22.031 "name": "Existed_Raid", 00:15:22.031 "uuid": "4e9fe199-49e9-419c-8a52-594dc8200569", 00:15:22.031 "strip_size_kb": 64, 00:15:22.031 "state": "offline", 00:15:22.031 "raid_level": "concat", 00:15:22.031 "superblock": true, 00:15:22.031 "num_base_bdevs": 2, 00:15:22.031 "num_base_bdevs_discovered": 1, 00:15:22.031 "num_base_bdevs_operational": 1, 00:15:22.031 "base_bdevs_list": [ 00:15:22.031 { 00:15:22.031 "name": null, 00:15:22.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.031 "is_configured": false, 00:15:22.031 "data_offset": 2048, 00:15:22.031 "data_size": 63488 00:15:22.031 }, 00:15:22.031 { 00:15:22.031 "name": "BaseBdev2", 00:15:22.031 "uuid": "9061da14-d2e9-40af-a318-0cc49ee3ddf3", 00:15:22.031 "is_configured": true, 00:15:22.031 "data_offset": 2048, 00:15:22.031 "data_size": 63488 00:15:22.031 } 00:15:22.031 ] 00:15:22.031 }' 00:15:22.031 16:54:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:22.031 16:54:10 -- common/autotest_common.sh@10 -- # set +x 00:15:22.598 16:54:11 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:22.598 16:54:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:22.598 16:54:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.598 16:54:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:22.857 16:54:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:22.857 16:54:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:22.857 16:54:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:23.116 [2024-11-05 16:54:11.953366] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:23.116 [2024-11-05 16:54:11.953606] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:15:23.373 16:54:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:23.373 16:54:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:23.373 16:54:12 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.373 16:54:12 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:23.373 16:54:12 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:15:23.373 16:54:12 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:23.373 16:54:12 -- bdev/bdev_raid.sh@287 -- # killprocess 113261 00:15:23.373 16:54:12 -- common/autotest_common.sh@936 -- # '[' -z 113261 ']' 00:15:23.373 16:54:12 -- common/autotest_common.sh@940 -- # kill -0 113261 00:15:23.373 16:54:12 -- common/autotest_common.sh@941 -- # uname 00:15:23.373 16:54:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:23.373 16:54:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113261 00:15:23.631 16:54:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:23.631 16:54:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:23.631 16:54:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113261' 00:15:23.631 killing process with pid 113261 00:15:23.631 16:54:12 -- common/autotest_common.sh@955 -- # kill 113261 00:15:23.631 [2024-11-05 16:54:12.276856] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:23.631 16:54:12 -- common/autotest_common.sh@960 -- # wait 113261 00:15:23.631 [2024-11-05 16:54:12.277067] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:24.608 00:15:24.608 real 0m10.984s 00:15:24.608 user 0m19.054s 00:15:24.608 sys 0m1.382s 00:15:24.608 16:54:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:24.608 16:54:13 -- common/autotest_common.sh@10 -- # set +x 00:15:24.608 ************************************ 00:15:24.608 END TEST raid_state_function_test_sb 00:15:24.608 ************************************ 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:15:24.608 16:54:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:24.608 16:54:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:24.608 16:54:13 -- common/autotest_common.sh@10 -- # set +x 00:15:24.608 ************************************ 00:15:24.608 START TEST raid_superblock_test 00:15:24.608 ************************************ 00:15:24.608 16:54:13 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 2 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@357 -- # raid_pid=113597 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@356 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:24.608 16:54:13 -- bdev/bdev_raid.sh@358 -- # waitforlisten 113597 /var/tmp/spdk-raid.sock 00:15:24.608 16:54:13 -- common/autotest_common.sh@829 -- # '[' -z 113597 ']' 00:15:24.609 16:54:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:24.609 16:54:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:24.609 16:54:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:24.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:24.609 16:54:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:24.609 16:54:13 -- common/autotest_common.sh@10 -- # set +x 00:15:24.609 [2024-11-05 16:54:13.349034] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:24.609 [2024-11-05 16:54:13.349393] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113597 ] 00:15:24.867 [2024-11-05 16:54:13.507105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.867 [2024-11-05 16:54:13.685106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.126 [2024-11-05 16:54:13.865779] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.691 16:54:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:25.691 16:54:14 -- common/autotest_common.sh@862 -- # return 0 00:15:25.691 16:54:14 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:25.691 16:54:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:25.691 16:54:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:25.691 16:54:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:25.691 16:54:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:25.691 16:54:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:25.691 16:54:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:25.691 16:54:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:25.691 16:54:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:25.691 malloc1 00:15:25.691 16:54:14 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:25.949 [2024-11-05 16:54:14.811682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:25.949 [2024-11-05 16:54:14.811981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.949 [2024-11-05 16:54:14.812136] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:15:25.949 [2024-11-05 16:54:14.812281] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.949 [2024-11-05 16:54:14.814822] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.949 [2024-11-05 16:54:14.815012] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:25.949 pt1 00:15:25.949 16:54:14 -- bdev/bdev_raid.sh@361 
-- # (( i++ )) 00:15:25.949 16:54:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:25.949 16:54:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:25.949 16:54:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:25.949 16:54:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:25.949 16:54:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:25.949 16:54:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:25.949 16:54:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:25.949 16:54:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:26.207 malloc2 00:15:26.207 16:54:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:26.466 [2024-11-05 16:54:15.322208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:26.466 [2024-11-05 16:54:15.322470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.466 [2024-11-05 16:54:15.322555] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:26.466 [2024-11-05 16:54:15.322861] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.466 [2024-11-05 16:54:15.325437] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.466 [2024-11-05 16:54:15.325646] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:26.466 pt2 00:15:26.466 16:54:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:26.466 16:54:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:26.466 16:54:15 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:15:26.724 [2024-11-05 16:54:15.534363] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:26.724 [2024-11-05 16:54:15.536568] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:26.724 [2024-11-05 16:54:15.537006] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:15:26.724 [2024-11-05 16:54:15.537133] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:26.724 [2024-11-05 16:54:15.537387] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:26.724 [2024-11-05 16:54:15.537989] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:15:26.724 [2024-11-05 16:54:15.538130] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:15:26.724 [2024-11-05 16:54:15.538440] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.724 16:54:15 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:26.724 16:54:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:26.724 16:54:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:26.724 16:54:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:26.724 16:54:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:26.724 16:54:15 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:15:26.724 16:54:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:26.724 16:54:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:26.724 16:54:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:26.724 16:54:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:26.724 16:54:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.724 16:54:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.983 16:54:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:26.983 "name": "raid_bdev1", 00:15:26.983 "uuid": "b4c5dc37-8664-4a5b-9348-92f873dcffe7", 00:15:26.983 "strip_size_kb": 64, 00:15:26.983 "state": "online", 00:15:26.983 "raid_level": "concat", 00:15:26.983 "superblock": true, 00:15:26.983 "num_base_bdevs": 2, 00:15:26.983 "num_base_bdevs_discovered": 2, 00:15:26.983 "num_base_bdevs_operational": 2, 00:15:26.983 "base_bdevs_list": [ 00:15:26.983 { 00:15:26.983 "name": "pt1", 00:15:26.983 "uuid": "4ac2b501-01e0-54d9-896a-64cf2893b734", 00:15:26.983 "is_configured": true, 00:15:26.983 "data_offset": 2048, 00:15:26.983 "data_size": 63488 00:15:26.983 }, 00:15:26.983 { 00:15:26.983 "name": "pt2", 00:15:26.983 "uuid": "e1ca3b81-14c5-5831-820e-ad8755b72c3a", 00:15:26.983 "is_configured": true, 00:15:26.983 "data_offset": 2048, 00:15:26.983 "data_size": 63488 00:15:26.983 } 00:15:26.983 ] 00:15:26.983 }' 00:15:26.983 16:54:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:26.983 16:54:15 -- common/autotest_common.sh@10 -- # set +x 00:15:27.551 16:54:16 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:27.551 16:54:16 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:27.810 [2024-11-05 16:54:16.674951] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.810 16:54:16 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=b4c5dc37-8664-4a5b-9348-92f873dcffe7 00:15:27.810 16:54:16 -- bdev/bdev_raid.sh@380 -- # '[' -z b4c5dc37-8664-4a5b-9348-92f873dcffe7 ']' 00:15:27.810 16:54:16 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:28.069 [2024-11-05 16:54:16.874707] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:28.069 [2024-11-05 16:54:16.874974] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:28.069 [2024-11-05 16:54:16.875216] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.069 [2024-11-05 16:54:16.875416] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.069 [2024-11-05 16:54:16.875525] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:15:28.069 16:54:16 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.069 16:54:16 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:28.328 16:54:17 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:28.328 16:54:17 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:28.328 16:54:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:28.328 16:54:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:15:28.587 16:54:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:28.587 16:54:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:28.845 16:54:17 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:28.845 16:54:17 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:29.103 16:54:17 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:29.103 16:54:17 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:29.103 16:54:17 -- common/autotest_common.sh@650 -- # local es=0 00:15:29.103 16:54:17 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:29.103 16:54:17 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.103 16:54:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.103 16:54:17 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.103 16:54:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.103 16:54:17 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.103 16:54:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.103 16:54:17 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.104 16:54:17 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:29.104 16:54:17 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:29.104 [2024-11-05 16:54:17.987027] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:29.104 [2024-11-05 16:54:17.989279] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:29.104 [2024-11-05 16:54:17.989500] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:29.104 [2024-11-05 16:54:17.989718] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:29.104 [2024-11-05 16:54:17.989865] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:29.104 [2024-11-05 16:54:17.989909] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:15:29.104 request: 00:15:29.104 { 00:15:29.104 "name": "raid_bdev1", 00:15:29.104 "raid_level": "concat", 00:15:29.104 "base_bdevs": [ 00:15:29.104 "malloc1", 00:15:29.104 "malloc2" 00:15:29.104 ], 00:15:29.104 "superblock": false, 00:15:29.104 "strip_size_kb": 64, 00:15:29.104 "method": "bdev_raid_create", 00:15:29.104 "req_id": 1 00:15:29.104 } 00:15:29.104 Got JSON-RPC error response 00:15:29.104 response: 00:15:29.104 { 00:15:29.104 "code": -17, 00:15:29.104 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:29.104 } 00:15:29.363 16:54:18 -- common/autotest_common.sh@653 -- # es=1 00:15:29.363 16:54:18 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:29.363 16:54:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:29.363 16:54:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:29.363 16:54:18 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.363 16:54:18 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:29.363 16:54:18 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:29.363 16:54:18 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:29.363 16:54:18 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:29.622 [2024-11-05 16:54:18.435072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:29.622 [2024-11-05 16:54:18.435437] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.622 [2024-11-05 16:54:18.435591] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:29.622 [2024-11-05 16:54:18.435722] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.622 [2024-11-05 16:54:18.438109] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.622 [2024-11-05 16:54:18.438287] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:29.622 [2024-11-05 16:54:18.438555] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:29.622 [2024-11-05 16:54:18.438724] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:29.622 pt1 00:15:29.622 16:54:18 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:15:29.622 16:54:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:29.622 16:54:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:29.622 16:54:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:29.622 16:54:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:29.622 16:54:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:29.622 16:54:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:29.622 16:54:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:29.622 16:54:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:29.622 16:54:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:29.622 16:54:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.622 16:54:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.881 16:54:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:29.881 "name": "raid_bdev1", 00:15:29.881 "uuid": "b4c5dc37-8664-4a5b-9348-92f873dcffe7", 00:15:29.881 "strip_size_kb": 64, 00:15:29.881 "state": "configuring", 00:15:29.881 "raid_level": "concat", 00:15:29.881 "superblock": true, 00:15:29.881 "num_base_bdevs": 2, 00:15:29.881 "num_base_bdevs_discovered": 1, 00:15:29.881 "num_base_bdevs_operational": 2, 00:15:29.881 "base_bdevs_list": [ 00:15:29.881 { 00:15:29.881 "name": "pt1", 00:15:29.881 "uuid": "4ac2b501-01e0-54d9-896a-64cf2893b734", 00:15:29.881 "is_configured": true, 00:15:29.881 "data_offset": 2048, 00:15:29.881 "data_size": 63488 00:15:29.881 }, 00:15:29.881 { 00:15:29.881 "name": null, 00:15:29.881 "uuid": 
"e1ca3b81-14c5-5831-820e-ad8755b72c3a", 00:15:29.881 "is_configured": false, 00:15:29.881 "data_offset": 2048, 00:15:29.881 "data_size": 63488 00:15:29.881 } 00:15:29.881 ] 00:15:29.881 }' 00:15:29.881 16:54:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:29.881 16:54:18 -- common/autotest_common.sh@10 -- # set +x 00:15:30.449 16:54:19 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:30.449 16:54:19 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:30.449 16:54:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:30.449 16:54:19 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:30.713 [2024-11-05 16:54:19.479416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:30.713 [2024-11-05 16:54:19.479749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.713 [2024-11-05 16:54:19.479901] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:30.713 [2024-11-05 16:54:19.480048] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.713 [2024-11-05 16:54:19.480626] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.714 [2024-11-05 16:54:19.480811] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:30.714 [2024-11-05 16:54:19.481051] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:30.714 [2024-11-05 16:54:19.481208] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:30.714 [2024-11-05 16:54:19.481448] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:15:30.714 [2024-11-05 16:54:19.481568] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:30.714 [2024-11-05 16:54:19.481742] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:30.714 [2024-11-05 16:54:19.482127] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:15:30.714 [2024-11-05 16:54:19.482283] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:15:30.714 [2024-11-05 16:54:19.482540] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.714 pt2 00:15:30.714 16:54:19 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:30.714 16:54:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:30.714 16:54:19 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:30.714 16:54:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:30.714 16:54:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:30.714 16:54:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:30.714 16:54:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:30.714 16:54:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:30.714 16:54:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:30.714 16:54:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:30.714 16:54:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:30.714 16:54:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:30.714 16:54:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.714 16:54:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.980 16:54:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:30.980 "name": "raid_bdev1", 00:15:30.980 "uuid": "b4c5dc37-8664-4a5b-9348-92f873dcffe7", 00:15:30.980 "strip_size_kb": 64, 00:15:30.980 "state": "online", 00:15:30.980 "raid_level": "concat", 00:15:30.980 "superblock": true, 00:15:30.980 "num_base_bdevs": 2, 00:15:30.980 "num_base_bdevs_discovered": 2, 00:15:30.980 "num_base_bdevs_operational": 2, 00:15:30.980 "base_bdevs_list": [ 00:15:30.980 { 00:15:30.980 "name": "pt1", 00:15:30.980 "uuid": "4ac2b501-01e0-54d9-896a-64cf2893b734", 00:15:30.980 "is_configured": true, 00:15:30.980 "data_offset": 2048, 00:15:30.980 "data_size": 63488 00:15:30.980 }, 00:15:30.980 { 00:15:30.980 "name": "pt2", 00:15:30.980 "uuid": "e1ca3b81-14c5-5831-820e-ad8755b72c3a", 00:15:30.980 "is_configured": true, 00:15:30.980 "data_offset": 2048, 00:15:30.980 "data_size": 63488 00:15:30.980 } 00:15:30.980 ] 00:15:30.980 }' 00:15:30.980 16:54:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:30.980 16:54:19 -- common/autotest_common.sh@10 -- # set +x 00:15:31.548 16:54:20 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:31.548 16:54:20 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:31.806 [2024-11-05 16:54:20.607895] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.806 16:54:20 -- bdev/bdev_raid.sh@430 -- # '[' b4c5dc37-8664-4a5b-9348-92f873dcffe7 '!=' b4c5dc37-8664-4a5b-9348-92f873dcffe7 ']' 00:15:31.806 16:54:20 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:15:31.806 16:54:20 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:31.806 16:54:20 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:31.806 16:54:20 -- bdev/bdev_raid.sh@511 -- # killprocess 113597 00:15:31.806 16:54:20 -- common/autotest_common.sh@936 -- # '[' -z 113597 ']' 00:15:31.807 16:54:20 -- common/autotest_common.sh@940 -- # kill -0 113597 00:15:31.807 16:54:20 -- common/autotest_common.sh@941 -- # uname 00:15:31.807 16:54:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:31.807 16:54:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113597 00:15:31.807 16:54:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:31.807 16:54:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:31.807 16:54:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113597' 00:15:31.807 killing process with pid 113597 00:15:31.807 16:54:20 -- common/autotest_common.sh@955 -- # kill 113597 00:15:31.807 [2024-11-05 16:54:20.653897] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.807 16:54:20 -- common/autotest_common.sh@960 -- # wait 113597 00:15:31.807 [2024-11-05 16:54:20.654090] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.807 [2024-11-05 16:54:20.654333] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.807 [2024-11-05 16:54:20.654427] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:15:32.065 [2024-11-05 16:54:20.787233] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:33.002 ************************************ 00:15:33.002 END TEST raid_superblock_test 00:15:33.002 
************************************ 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:33.002 00:15:33.002 real 0m8.436s 00:15:33.002 user 0m14.431s 00:15:33.002 sys 0m1.041s 00:15:33.002 16:54:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:33.002 16:54:21 -- common/autotest_common.sh@10 -- # set +x 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:15:33.002 16:54:21 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:33.002 16:54:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:33.002 16:54:21 -- common/autotest_common.sh@10 -- # set +x 00:15:33.002 ************************************ 00:15:33.002 START TEST raid_state_function_test 00:15:33.002 ************************************ 00:15:33.002 16:54:21 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 2 false 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@226 -- # raid_pid=113849 00:15:33.002 Process raid pid: 113849 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 113849' 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:33.002 16:54:21 -- bdev/bdev_raid.sh@228 -- # waitforlisten 113849 /var/tmp/spdk-raid.sock 00:15:33.002 16:54:21 -- common/autotest_common.sh@829 -- # '[' -z 113849 ']' 00:15:33.002 16:54:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:33.002 16:54:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:33.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
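The run starting here repeats the state-function flow for raid1: strip_size is forced to 0 (raid1 takes no -z argument) and the superblock flag is false, so the bare create below is what the next records trace. Creating the raid before its base bdevs exist is expected to succeed and leave it in the configuring state, waiting for BaseBdev1 and BaseBdev2 to appear. A sketch, assuming the same RPC socket (the command is copied verbatim from the trace):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # raid1: no strip size (-z) and no superblock (-s); base bdevs may not exist yet
    $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid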
00:15:33.002 16:54:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:33.002 16:54:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:33.002 16:54:21 -- common/autotest_common.sh@10 -- # set +x 00:15:33.002 [2024-11-05 16:54:21.852673] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:33.002 [2024-11-05 16:54:21.852932] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.261 [2024-11-05 16:54:22.019450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.520 [2024-11-05 16:54:22.188680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.520 [2024-11-05 16:54:22.373099] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.087 16:54:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:34.087 16:54:22 -- common/autotest_common.sh@862 -- # return 0 00:15:34.087 16:54:22 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:34.346 [2024-11-05 16:54:23.008840] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:34.346 [2024-11-05 16:54:23.008932] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:34.346 [2024-11-05 16:54:23.008960] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.346 [2024-11-05 16:54:23.008977] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.346 16:54:23 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:34.346 16:54:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:34.346 16:54:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:34.346 16:54:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:34.346 16:54:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:34.346 16:54:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:34.346 16:54:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:34.346 16:54:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:34.346 16:54:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:34.346 16:54:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:34.346 16:54:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.346 16:54:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.605 16:54:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:34.605 "name": "Existed_Raid", 00:15:34.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.605 "strip_size_kb": 0, 00:15:34.605 "state": "configuring", 00:15:34.605 "raid_level": "raid1", 00:15:34.605 "superblock": false, 00:15:34.605 "num_base_bdevs": 2, 00:15:34.605 "num_base_bdevs_discovered": 0, 00:15:34.605 "num_base_bdevs_operational": 2, 00:15:34.605 "base_bdevs_list": [ 00:15:34.605 { 00:15:34.605 "name": "BaseBdev1", 00:15:34.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.605 "is_configured": false, 00:15:34.605 
"data_offset": 0, 00:15:34.605 "data_size": 0 00:15:34.605 }, 00:15:34.605 { 00:15:34.605 "name": "BaseBdev2", 00:15:34.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.605 "is_configured": false, 00:15:34.605 "data_offset": 0, 00:15:34.605 "data_size": 0 00:15:34.605 } 00:15:34.605 ] 00:15:34.605 }' 00:15:34.605 16:54:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:34.605 16:54:23 -- common/autotest_common.sh@10 -- # set +x 00:15:35.172 16:54:23 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:35.431 [2024-11-05 16:54:24.121523] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:35.431 [2024-11-05 16:54:24.121583] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:15:35.431 16:54:24 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:35.690 [2024-11-05 16:54:24.369562] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:35.690 [2024-11-05 16:54:24.369655] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:35.690 [2024-11-05 16:54:24.369682] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:35.690 [2024-11-05 16:54:24.369703] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:35.690 16:54:24 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:35.949 [2024-11-05 16:54:24.602217] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.949 BaseBdev1 00:15:35.949 16:54:24 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:35.949 16:54:24 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:35.949 16:54:24 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:35.949 16:54:24 -- common/autotest_common.sh@899 -- # local i 00:15:35.949 16:54:24 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:35.949 16:54:24 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:35.949 16:54:24 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:36.207 16:54:24 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:36.466 [ 00:15:36.466 { 00:15:36.466 "name": "BaseBdev1", 00:15:36.466 "aliases": [ 00:15:36.466 "462fe997-d307-4faf-80e6-1a771ac47cf1" 00:15:36.466 ], 00:15:36.466 "product_name": "Malloc disk", 00:15:36.466 "block_size": 512, 00:15:36.466 "num_blocks": 65536, 00:15:36.466 "uuid": "462fe997-d307-4faf-80e6-1a771ac47cf1", 00:15:36.466 "assigned_rate_limits": { 00:15:36.466 "rw_ios_per_sec": 0, 00:15:36.466 "rw_mbytes_per_sec": 0, 00:15:36.466 "r_mbytes_per_sec": 0, 00:15:36.466 "w_mbytes_per_sec": 0 00:15:36.466 }, 00:15:36.466 "claimed": true, 00:15:36.466 "claim_type": "exclusive_write", 00:15:36.466 "zoned": false, 00:15:36.466 "supported_io_types": { 00:15:36.466 "read": true, 00:15:36.466 "write": true, 00:15:36.466 "unmap": true, 00:15:36.466 "write_zeroes": true, 00:15:36.466 "flush": true, 00:15:36.466 "reset": true, 00:15:36.466 "compare": false, 
00:15:36.466 "compare_and_write": false, 00:15:36.466 "abort": true, 00:15:36.466 "nvme_admin": false, 00:15:36.466 "nvme_io": false 00:15:36.466 }, 00:15:36.466 "memory_domains": [ 00:15:36.466 { 00:15:36.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.466 "dma_device_type": 2 00:15:36.466 } 00:15:36.466 ], 00:15:36.466 "driver_specific": {} 00:15:36.466 } 00:15:36.466 ] 00:15:36.466 16:54:25 -- common/autotest_common.sh@905 -- # return 0 00:15:36.466 16:54:25 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:36.466 16:54:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:36.466 16:54:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:36.466 16:54:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:36.466 16:54:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:36.467 16:54:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:36.467 16:54:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:36.467 16:54:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:36.467 16:54:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:36.467 16:54:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:36.467 16:54:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.467 16:54:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.467 16:54:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:36.467 "name": "Existed_Raid", 00:15:36.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.467 "strip_size_kb": 0, 00:15:36.467 "state": "configuring", 00:15:36.467 "raid_level": "raid1", 00:15:36.467 "superblock": false, 00:15:36.467 "num_base_bdevs": 2, 00:15:36.467 "num_base_bdevs_discovered": 1, 00:15:36.467 "num_base_bdevs_operational": 2, 00:15:36.467 "base_bdevs_list": [ 00:15:36.467 { 00:15:36.467 "name": "BaseBdev1", 00:15:36.467 "uuid": "462fe997-d307-4faf-80e6-1a771ac47cf1", 00:15:36.467 "is_configured": true, 00:15:36.467 "data_offset": 0, 00:15:36.467 "data_size": 65536 00:15:36.467 }, 00:15:36.467 { 00:15:36.467 "name": "BaseBdev2", 00:15:36.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.467 "is_configured": false, 00:15:36.467 "data_offset": 0, 00:15:36.467 "data_size": 0 00:15:36.467 } 00:15:36.467 ] 00:15:36.467 }' 00:15:36.467 16:54:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:36.467 16:54:25 -- common/autotest_common.sh@10 -- # set +x 00:15:37.034 16:54:25 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:37.309 [2024-11-05 16:54:26.067366] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:37.309 [2024-11-05 16:54:26.067434] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:37.309 16:54:26 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:37.309 16:54:26 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:37.578 [2024-11-05 16:54:26.247456] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:37.578 [2024-11-05 16:54:26.249387] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.578 [2024-11-05 
16:54:26.249445] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.578 16:54:26 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:37.578 16:54:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:37.578 16:54:26 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:37.578 16:54:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:37.578 16:54:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:37.578 16:54:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:37.578 16:54:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:37.578 16:54:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:37.578 16:54:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:37.578 16:54:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:37.578 16:54:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:37.578 16:54:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:37.578 16:54:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.578 16:54:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.578 16:54:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:37.578 "name": "Existed_Raid", 00:15:37.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.578 "strip_size_kb": 0, 00:15:37.578 "state": "configuring", 00:15:37.578 "raid_level": "raid1", 00:15:37.578 "superblock": false, 00:15:37.578 "num_base_bdevs": 2, 00:15:37.578 "num_base_bdevs_discovered": 1, 00:15:37.578 "num_base_bdevs_operational": 2, 00:15:37.578 "base_bdevs_list": [ 00:15:37.578 { 00:15:37.578 "name": "BaseBdev1", 00:15:37.578 "uuid": "462fe997-d307-4faf-80e6-1a771ac47cf1", 00:15:37.578 "is_configured": true, 00:15:37.578 "data_offset": 0, 00:15:37.578 "data_size": 65536 00:15:37.578 }, 00:15:37.578 { 00:15:37.578 "name": "BaseBdev2", 00:15:37.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.578 "is_configured": false, 00:15:37.578 "data_offset": 0, 00:15:37.578 "data_size": 0 00:15:37.578 } 00:15:37.578 ] 00:15:37.578 }' 00:15:37.578 16:54:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:37.578 16:54:26 -- common/autotest_common.sh@10 -- # set +x 00:15:38.514 16:54:27 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:38.514 [2024-11-05 16:54:27.317908] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.514 [2024-11-05 16:54:27.317960] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:15:38.514 [2024-11-05 16:54:27.317969] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:38.514 [2024-11-05 16:54:27.318074] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:38.514 [2024-11-05 16:54:27.318433] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:15:38.514 [2024-11-05 16:54:27.318448] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:15:38.514 [2024-11-05 16:54:27.318702] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.514 BaseBdev2 00:15:38.514 16:54:27 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:38.514 
16:54:27 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:38.514 16:54:27 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:38.514 16:54:27 -- common/autotest_common.sh@899 -- # local i 00:15:38.514 16:54:27 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:38.514 16:54:27 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:38.514 16:54:27 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:38.773 16:54:27 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:39.032 [ 00:15:39.032 { 00:15:39.032 "name": "BaseBdev2", 00:15:39.032 "aliases": [ 00:15:39.032 "26d69fe3-b485-4fa7-9193-1265966579a1" 00:15:39.032 ], 00:15:39.032 "product_name": "Malloc disk", 00:15:39.032 "block_size": 512, 00:15:39.032 "num_blocks": 65536, 00:15:39.032 "uuid": "26d69fe3-b485-4fa7-9193-1265966579a1", 00:15:39.032 "assigned_rate_limits": { 00:15:39.032 "rw_ios_per_sec": 0, 00:15:39.032 "rw_mbytes_per_sec": 0, 00:15:39.032 "r_mbytes_per_sec": 0, 00:15:39.032 "w_mbytes_per_sec": 0 00:15:39.032 }, 00:15:39.032 "claimed": true, 00:15:39.032 "claim_type": "exclusive_write", 00:15:39.032 "zoned": false, 00:15:39.032 "supported_io_types": { 00:15:39.032 "read": true, 00:15:39.032 "write": true, 00:15:39.032 "unmap": true, 00:15:39.032 "write_zeroes": true, 00:15:39.032 "flush": true, 00:15:39.032 "reset": true, 00:15:39.032 "compare": false, 00:15:39.032 "compare_and_write": false, 00:15:39.032 "abort": true, 00:15:39.032 "nvme_admin": false, 00:15:39.032 "nvme_io": false 00:15:39.032 }, 00:15:39.032 "memory_domains": [ 00:15:39.032 { 00:15:39.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.032 "dma_device_type": 2 00:15:39.032 } 00:15:39.032 ], 00:15:39.032 "driver_specific": {} 00:15:39.032 } 00:15:39.032 ] 00:15:39.032 16:54:27 -- common/autotest_common.sh@905 -- # return 0 00:15:39.032 16:54:27 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:39.032 16:54:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:39.032 16:54:27 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:39.032 16:54:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:39.032 16:54:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:39.032 16:54:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:39.032 16:54:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:39.032 16:54:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:39.032 16:54:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:39.032 16:54:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:39.032 16:54:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:39.032 16:54:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:39.032 16:54:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.032 16:54:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.032 16:54:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:39.032 "name": "Existed_Raid", 00:15:39.032 "uuid": "37cae4c5-3f94-4c8d-bb96-3a2c675911f1", 00:15:39.032 "strip_size_kb": 0, 00:15:39.032 "state": "online", 00:15:39.032 "raid_level": "raid1", 00:15:39.032 "superblock": false, 00:15:39.032 "num_base_bdevs": 2, 00:15:39.032 
"num_base_bdevs_discovered": 2, 00:15:39.032 "num_base_bdevs_operational": 2, 00:15:39.032 "base_bdevs_list": [ 00:15:39.032 { 00:15:39.032 "name": "BaseBdev1", 00:15:39.032 "uuid": "462fe997-d307-4faf-80e6-1a771ac47cf1", 00:15:39.032 "is_configured": true, 00:15:39.032 "data_offset": 0, 00:15:39.032 "data_size": 65536 00:15:39.032 }, 00:15:39.032 { 00:15:39.032 "name": "BaseBdev2", 00:15:39.032 "uuid": "26d69fe3-b485-4fa7-9193-1265966579a1", 00:15:39.032 "is_configured": true, 00:15:39.032 "data_offset": 0, 00:15:39.032 "data_size": 65536 00:15:39.032 } 00:15:39.032 ] 00:15:39.032 }' 00:15:39.032 16:54:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:39.032 16:54:27 -- common/autotest_common.sh@10 -- # set +x 00:15:39.969 16:54:28 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:39.969 [2024-11-05 16:54:28.746322] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:39.969 16:54:28 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:39.969 16:54:28 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:39.969 16:54:28 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:39.969 16:54:28 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:39.969 16:54:28 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:39.969 16:54:28 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:39.969 16:54:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:39.969 16:54:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:39.969 16:54:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:39.969 16:54:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:39.969 16:54:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:39.969 16:54:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:39.969 16:54:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:39.969 16:54:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:39.969 16:54:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:39.969 16:54:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.969 16:54:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.227 16:54:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:40.227 "name": "Existed_Raid", 00:15:40.227 "uuid": "37cae4c5-3f94-4c8d-bb96-3a2c675911f1", 00:15:40.227 "strip_size_kb": 0, 00:15:40.227 "state": "online", 00:15:40.227 "raid_level": "raid1", 00:15:40.227 "superblock": false, 00:15:40.227 "num_base_bdevs": 2, 00:15:40.227 "num_base_bdevs_discovered": 1, 00:15:40.227 "num_base_bdevs_operational": 1, 00:15:40.228 "base_bdevs_list": [ 00:15:40.228 { 00:15:40.228 "name": null, 00:15:40.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.228 "is_configured": false, 00:15:40.228 "data_offset": 0, 00:15:40.228 "data_size": 65536 00:15:40.228 }, 00:15:40.228 { 00:15:40.228 "name": "BaseBdev2", 00:15:40.228 "uuid": "26d69fe3-b485-4fa7-9193-1265966579a1", 00:15:40.228 "is_configured": true, 00:15:40.228 "data_offset": 0, 00:15:40.228 "data_size": 65536 00:15:40.228 } 00:15:40.228 ] 00:15:40.228 }' 00:15:40.228 16:54:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:40.228 16:54:29 -- common/autotest_common.sh@10 -- # set +x 00:15:41.163 16:54:29 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:41.163 16:54:29 -- bdev/bdev_raid.sh@273 -- # 
(( i < num_base_bdevs )) 00:15:41.163 16:54:29 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.163 16:54:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:41.163 16:54:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:41.163 16:54:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:41.163 16:54:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:41.422 [2024-11-05 16:54:30.175806] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:41.422 [2024-11-05 16:54:30.175842] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.422 [2024-11-05 16:54:30.175944] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.422 [2024-11-05 16:54:30.247523] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.422 [2024-11-05 16:54:30.247575] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:15:41.422 16:54:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:41.422 16:54:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:41.422 16:54:30 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:41.422 16:54:30 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.681 16:54:30 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:41.681 16:54:30 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:41.681 16:54:30 -- bdev/bdev_raid.sh@287 -- # killprocess 113849 00:15:41.681 16:54:30 -- common/autotest_common.sh@936 -- # '[' -z 113849 ']' 00:15:41.681 16:54:30 -- common/autotest_common.sh@940 -- # kill -0 113849 00:15:41.681 16:54:30 -- common/autotest_common.sh@941 -- # uname 00:15:41.681 16:54:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:41.681 16:54:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113849 00:15:41.681 16:54:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:41.681 16:54:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:41.681 16:54:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113849' 00:15:41.681 killing process with pid 113849 00:15:41.681 16:54:30 -- common/autotest_common.sh@955 -- # kill 113849 00:15:41.681 16:54:30 -- common/autotest_common.sh@960 -- # wait 113849 00:15:41.681 [2024-11-05 16:54:30.491767] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:41.681 [2024-11-05 16:54:30.492049] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:42.617 ************************************ 00:15:42.617 END TEST raid_state_function_test 00:15:42.617 ************************************ 00:15:42.617 16:54:31 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:42.617 00:15:42.617 real 0m9.640s 00:15:42.617 user 0m16.755s 00:15:42.617 sys 0m1.207s 00:15:42.617 16:54:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:42.617 16:54:31 -- common/autotest_common.sh@10 -- # set +x 00:15:42.617 16:54:31 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:15:42.617 16:54:31 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:42.617 16:54:31 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:15:42.617 16:54:31 -- common/autotest_common.sh@10 -- # set +x 00:15:42.618 ************************************ 00:15:42.618 START TEST raid_state_function_test_sb 00:15:42.618 ************************************ 00:15:42.618 16:54:31 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 2 true 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@226 -- # raid_pid=114163 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 114163' 00:15:42.618 Process raid pid: 114163 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:42.618 16:54:31 -- bdev/bdev_raid.sh@228 -- # waitforlisten 114163 /var/tmp/spdk-raid.sock 00:15:42.618 16:54:31 -- common/autotest_common.sh@829 -- # '[' -z 114163 ']' 00:15:42.618 16:54:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:42.618 16:54:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:42.618 16:54:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:42.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:42.618 16:54:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:42.618 16:54:31 -- common/autotest_common.sh@10 -- # set +x 00:15:42.876 [2024-11-05 16:54:31.563205] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
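(A minimal sketch of the RPC sequence this superblock test exercises, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock; commands, bdev names, and paths are the ones visible in the trace above:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # create one base bdev (32 MiB, 512-byte blocks), as bdev_malloc_create does above
  $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1
  # -s requests an on-disk superblock; with it, data_offset in the later dumps is 2048 rather than 0
  $rpc -s $sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # with BaseBdev2 still missing, the array is left in state "configuring"
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
)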
00:15:42.876 [2024-11-05 16:54:31.563444] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.876 [2024-11-05 16:54:31.737235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.135 [2024-11-05 16:54:31.950706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.394 [2024-11-05 16:54:32.125718] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.653 16:54:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:43.653 16:54:32 -- common/autotest_common.sh@862 -- # return 0 00:15:43.653 16:54:32 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:43.913 [2024-11-05 16:54:32.727392] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:43.913 [2024-11-05 16:54:32.727471] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:43.913 [2024-11-05 16:54:32.727484] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.913 [2024-11-05 16:54:32.727502] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.913 16:54:32 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:43.914 16:54:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:43.914 16:54:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:43.914 16:54:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:43.914 16:54:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:43.914 16:54:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:43.914 16:54:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:43.914 16:54:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:43.914 16:54:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:43.914 16:54:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:43.914 16:54:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.914 16:54:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.174 16:54:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:44.174 "name": "Existed_Raid", 00:15:44.174 "uuid": "3e9fb7bd-6bf1-4f39-8623-d1cac3fb15b8", 00:15:44.175 "strip_size_kb": 0, 00:15:44.175 "state": "configuring", 00:15:44.175 "raid_level": "raid1", 00:15:44.175 "superblock": true, 00:15:44.175 "num_base_bdevs": 2, 00:15:44.175 "num_base_bdevs_discovered": 0, 00:15:44.175 "num_base_bdevs_operational": 2, 00:15:44.175 "base_bdevs_list": [ 00:15:44.175 { 00:15:44.175 "name": "BaseBdev1", 00:15:44.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.175 "is_configured": false, 00:15:44.175 "data_offset": 0, 00:15:44.175 "data_size": 0 00:15:44.175 }, 00:15:44.175 { 00:15:44.175 "name": "BaseBdev2", 00:15:44.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.175 "is_configured": false, 00:15:44.175 "data_offset": 0, 00:15:44.175 "data_size": 0 00:15:44.175 } 00:15:44.175 ] 00:15:44.175 }' 00:15:44.175 16:54:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:44.175 16:54:32 -- 
common/autotest_common.sh@10 -- # set +x 00:15:44.742 16:54:33 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:45.002 [2024-11-05 16:54:33.851692] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:45.002 [2024-11-05 16:54:33.851753] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:15:45.002 16:54:33 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:45.262 [2024-11-05 16:54:34.107775] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:45.262 [2024-11-05 16:54:34.107886] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:45.262 [2024-11-05 16:54:34.107908] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.262 [2024-11-05 16:54:34.107931] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.262 16:54:34 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:45.521 [2024-11-05 16:54:34.340182] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.521 BaseBdev1 00:15:45.521 16:54:34 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:45.521 16:54:34 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:45.521 16:54:34 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:45.521 16:54:34 -- common/autotest_common.sh@899 -- # local i 00:15:45.521 16:54:34 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:45.521 16:54:34 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:45.521 16:54:34 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:45.780 16:54:34 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:46.040 [ 00:15:46.040 { 00:15:46.040 "name": "BaseBdev1", 00:15:46.041 "aliases": [ 00:15:46.041 "5f24bebf-0042-4147-8fd9-5dd98a295600" 00:15:46.041 ], 00:15:46.041 "product_name": "Malloc disk", 00:15:46.041 "block_size": 512, 00:15:46.041 "num_blocks": 65536, 00:15:46.041 "uuid": "5f24bebf-0042-4147-8fd9-5dd98a295600", 00:15:46.041 "assigned_rate_limits": { 00:15:46.041 "rw_ios_per_sec": 0, 00:15:46.041 "rw_mbytes_per_sec": 0, 00:15:46.041 "r_mbytes_per_sec": 0, 00:15:46.041 "w_mbytes_per_sec": 0 00:15:46.041 }, 00:15:46.041 "claimed": true, 00:15:46.041 "claim_type": "exclusive_write", 00:15:46.041 "zoned": false, 00:15:46.041 "supported_io_types": { 00:15:46.041 "read": true, 00:15:46.041 "write": true, 00:15:46.041 "unmap": true, 00:15:46.041 "write_zeroes": true, 00:15:46.041 "flush": true, 00:15:46.041 "reset": true, 00:15:46.041 "compare": false, 00:15:46.041 "compare_and_write": false, 00:15:46.041 "abort": true, 00:15:46.041 "nvme_admin": false, 00:15:46.041 "nvme_io": false 00:15:46.041 }, 00:15:46.041 "memory_domains": [ 00:15:46.041 { 00:15:46.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.041 "dma_device_type": 2 00:15:46.041 } 00:15:46.041 ], 00:15:46.041 "driver_specific": {} 00:15:46.041 } 00:15:46.041 ] 00:15:46.041 16:54:34 -- 
common/autotest_common.sh@905 -- # return 0 00:15:46.041 16:54:34 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:46.041 16:54:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:46.041 16:54:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:46.041 16:54:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:46.041 16:54:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:46.041 16:54:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:46.041 16:54:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:46.041 16:54:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:46.041 16:54:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:46.041 16:54:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:46.041 16:54:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.041 16:54:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.300 16:54:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:46.301 "name": "Existed_Raid", 00:15:46.301 "uuid": "90f322b0-6ebb-4a35-9640-df4a8857c8a4", 00:15:46.301 "strip_size_kb": 0, 00:15:46.301 "state": "configuring", 00:15:46.301 "raid_level": "raid1", 00:15:46.301 "superblock": true, 00:15:46.301 "num_base_bdevs": 2, 00:15:46.301 "num_base_bdevs_discovered": 1, 00:15:46.301 "num_base_bdevs_operational": 2, 00:15:46.301 "base_bdevs_list": [ 00:15:46.301 { 00:15:46.301 "name": "BaseBdev1", 00:15:46.301 "uuid": "5f24bebf-0042-4147-8fd9-5dd98a295600", 00:15:46.301 "is_configured": true, 00:15:46.301 "data_offset": 2048, 00:15:46.301 "data_size": 63488 00:15:46.301 }, 00:15:46.301 { 00:15:46.301 "name": "BaseBdev2", 00:15:46.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.301 "is_configured": false, 00:15:46.301 "data_offset": 0, 00:15:46.301 "data_size": 0 00:15:46.301 } 00:15:46.301 ] 00:15:46.301 }' 00:15:46.301 16:54:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:46.301 16:54:34 -- common/autotest_common.sh@10 -- # set +x 00:15:46.869 16:54:35 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:47.127 [2024-11-05 16:54:35.776547] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:47.127 [2024-11-05 16:54:35.776619] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:47.127 16:54:35 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:47.127 16:54:35 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:47.387 16:54:36 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:47.646 BaseBdev1 00:15:47.646 16:54:36 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:47.646 16:54:36 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:47.646 16:54:36 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:47.646 16:54:36 -- common/autotest_common.sh@899 -- # local i 00:15:47.646 16:54:36 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:47.646 16:54:36 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:47.646 16:54:36 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:47.904 16:54:36 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:47.904 [ 00:15:47.904 { 00:15:47.904 "name": "BaseBdev1", 00:15:47.904 "aliases": [ 00:15:47.904 "4442f063-bdf4-4d42-9f87-40c9223cfce7" 00:15:47.905 ], 00:15:47.905 "product_name": "Malloc disk", 00:15:47.905 "block_size": 512, 00:15:47.905 "num_blocks": 65536, 00:15:47.905 "uuid": "4442f063-bdf4-4d42-9f87-40c9223cfce7", 00:15:47.905 "assigned_rate_limits": { 00:15:47.905 "rw_ios_per_sec": 0, 00:15:47.905 "rw_mbytes_per_sec": 0, 00:15:47.905 "r_mbytes_per_sec": 0, 00:15:47.905 "w_mbytes_per_sec": 0 00:15:47.905 }, 00:15:47.905 "claimed": false, 00:15:47.905 "zoned": false, 00:15:47.905 "supported_io_types": { 00:15:47.905 "read": true, 00:15:47.905 "write": true, 00:15:47.905 "unmap": true, 00:15:47.905 "write_zeroes": true, 00:15:47.905 "flush": true, 00:15:47.905 "reset": true, 00:15:47.905 "compare": false, 00:15:47.905 "compare_and_write": false, 00:15:47.905 "abort": true, 00:15:47.905 "nvme_admin": false, 00:15:47.905 "nvme_io": false 00:15:47.905 }, 00:15:47.905 "memory_domains": [ 00:15:47.905 { 00:15:47.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.905 "dma_device_type": 2 00:15:47.905 } 00:15:47.905 ], 00:15:47.905 "driver_specific": {} 00:15:47.905 } 00:15:47.905 ] 00:15:47.905 16:54:36 -- common/autotest_common.sh@905 -- # return 0 00:15:47.905 16:54:36 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:48.164 [2024-11-05 16:54:36.936906] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:48.164 [2024-11-05 16:54:36.939117] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:48.164 [2024-11-05 16:54:36.939187] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:48.164 16:54:36 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:48.164 16:54:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:48.164 16:54:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:48.164 16:54:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:48.164 16:54:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:48.164 16:54:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:48.164 16:54:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:48.164 16:54:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:48.164 16:54:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:48.164 16:54:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:48.164 16:54:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:48.164 16:54:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:48.164 16:54:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.164 16:54:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.423 16:54:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:48.423 "name": "Existed_Raid", 00:15:48.423 "uuid": "c22e8b54-9d0f-4cf7-bff6-a5132d0042bd", 00:15:48.423 "strip_size_kb": 0, 00:15:48.423 "state": "configuring", 
00:15:48.423 "raid_level": "raid1", 00:15:48.423 "superblock": true, 00:15:48.423 "num_base_bdevs": 2, 00:15:48.423 "num_base_bdevs_discovered": 1, 00:15:48.423 "num_base_bdevs_operational": 2, 00:15:48.423 "base_bdevs_list": [ 00:15:48.423 { 00:15:48.423 "name": "BaseBdev1", 00:15:48.423 "uuid": "4442f063-bdf4-4d42-9f87-40c9223cfce7", 00:15:48.423 "is_configured": true, 00:15:48.423 "data_offset": 2048, 00:15:48.423 "data_size": 63488 00:15:48.423 }, 00:15:48.423 { 00:15:48.423 "name": "BaseBdev2", 00:15:48.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.423 "is_configured": false, 00:15:48.423 "data_offset": 0, 00:15:48.423 "data_size": 0 00:15:48.423 } 00:15:48.423 ] 00:15:48.423 }' 00:15:48.423 16:54:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:48.423 16:54:37 -- common/autotest_common.sh@10 -- # set +x 00:15:48.991 16:54:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:49.249 [2024-11-05 16:54:38.102182] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:49.249 [2024-11-05 16:54:38.102424] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:15:49.249 [2024-11-05 16:54:38.102439] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:49.250 [2024-11-05 16:54:38.102556] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:49.250 [2024-11-05 16:54:38.102919] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:15:49.250 [2024-11-05 16:54:38.102940] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:15:49.250 BaseBdev2 00:15:49.250 [2024-11-05 16:54:38.103076] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.250 16:54:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:49.250 16:54:38 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:49.250 16:54:38 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:49.250 16:54:38 -- common/autotest_common.sh@899 -- # local i 00:15:49.250 16:54:38 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:49.250 16:54:38 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:49.250 16:54:38 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:49.508 16:54:38 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:49.767 [ 00:15:49.767 { 00:15:49.767 "name": "BaseBdev2", 00:15:49.767 "aliases": [ 00:15:49.767 "d82f9657-8427-400b-9b18-9a42a84d4bee" 00:15:49.767 ], 00:15:49.767 "product_name": "Malloc disk", 00:15:49.767 "block_size": 512, 00:15:49.767 "num_blocks": 65536, 00:15:49.767 "uuid": "d82f9657-8427-400b-9b18-9a42a84d4bee", 00:15:49.767 "assigned_rate_limits": { 00:15:49.767 "rw_ios_per_sec": 0, 00:15:49.767 "rw_mbytes_per_sec": 0, 00:15:49.767 "r_mbytes_per_sec": 0, 00:15:49.767 "w_mbytes_per_sec": 0 00:15:49.767 }, 00:15:49.767 "claimed": true, 00:15:49.767 "claim_type": "exclusive_write", 00:15:49.767 "zoned": false, 00:15:49.767 "supported_io_types": { 00:15:49.767 "read": true, 00:15:49.767 "write": true, 00:15:49.767 "unmap": true, 00:15:49.767 "write_zeroes": true, 00:15:49.767 "flush": true, 00:15:49.767 "reset": true, 
00:15:49.767 "compare": false, 00:15:49.767 "compare_and_write": false, 00:15:49.767 "abort": true, 00:15:49.767 "nvme_admin": false, 00:15:49.767 "nvme_io": false 00:15:49.767 }, 00:15:49.767 "memory_domains": [ 00:15:49.767 { 00:15:49.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.767 "dma_device_type": 2 00:15:49.767 } 00:15:49.767 ], 00:15:49.767 "driver_specific": {} 00:15:49.767 } 00:15:49.767 ] 00:15:49.767 16:54:38 -- common/autotest_common.sh@905 -- # return 0 00:15:49.767 16:54:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:49.767 16:54:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:49.767 16:54:38 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:49.767 16:54:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:49.767 16:54:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:49.767 16:54:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:49.767 16:54:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:49.767 16:54:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:49.767 16:54:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:49.767 16:54:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:49.767 16:54:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:49.767 16:54:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:49.767 16:54:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.768 16:54:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.043 16:54:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:50.043 "name": "Existed_Raid", 00:15:50.043 "uuid": "c22e8b54-9d0f-4cf7-bff6-a5132d0042bd", 00:15:50.043 "strip_size_kb": 0, 00:15:50.043 "state": "online", 00:15:50.043 "raid_level": "raid1", 00:15:50.043 "superblock": true, 00:15:50.043 "num_base_bdevs": 2, 00:15:50.043 "num_base_bdevs_discovered": 2, 00:15:50.043 "num_base_bdevs_operational": 2, 00:15:50.043 "base_bdevs_list": [ 00:15:50.043 { 00:15:50.043 "name": "BaseBdev1", 00:15:50.043 "uuid": "4442f063-bdf4-4d42-9f87-40c9223cfce7", 00:15:50.043 "is_configured": true, 00:15:50.043 "data_offset": 2048, 00:15:50.043 "data_size": 63488 00:15:50.043 }, 00:15:50.043 { 00:15:50.043 "name": "BaseBdev2", 00:15:50.043 "uuid": "d82f9657-8427-400b-9b18-9a42a84d4bee", 00:15:50.043 "is_configured": true, 00:15:50.043 "data_offset": 2048, 00:15:50.043 "data_size": 63488 00:15:50.043 } 00:15:50.043 ] 00:15:50.043 }' 00:15:50.043 16:54:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:50.043 16:54:38 -- common/autotest_common.sh@10 -- # set +x 00:15:50.611 16:54:39 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:50.869 [2024-11-05 16:54:39.602657] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:50.869 16:54:39 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:50.869 16:54:39 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:50.869 16:54:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:50.869 16:54:39 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:50.869 16:54:39 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:50.869 16:54:39 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:50.869 16:54:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:50.869 
16:54:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:50.869 16:54:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:50.869 16:54:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:50.869 16:54:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:50.869 16:54:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:50.869 16:54:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:50.869 16:54:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:50.869 16:54:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:50.869 16:54:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.869 16:54:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.134 16:54:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:51.134 "name": "Existed_Raid", 00:15:51.134 "uuid": "c22e8b54-9d0f-4cf7-bff6-a5132d0042bd", 00:15:51.134 "strip_size_kb": 0, 00:15:51.134 "state": "online", 00:15:51.134 "raid_level": "raid1", 00:15:51.134 "superblock": true, 00:15:51.134 "num_base_bdevs": 2, 00:15:51.134 "num_base_bdevs_discovered": 1, 00:15:51.134 "num_base_bdevs_operational": 1, 00:15:51.134 "base_bdevs_list": [ 00:15:51.134 { 00:15:51.134 "name": null, 00:15:51.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.134 "is_configured": false, 00:15:51.134 "data_offset": 2048, 00:15:51.134 "data_size": 63488 00:15:51.134 }, 00:15:51.134 { 00:15:51.134 "name": "BaseBdev2", 00:15:51.134 "uuid": "d82f9657-8427-400b-9b18-9a42a84d4bee", 00:15:51.134 "is_configured": true, 00:15:51.134 "data_offset": 2048, 00:15:51.134 "data_size": 63488 00:15:51.134 } 00:15:51.134 ] 00:15:51.134 }' 00:15:51.134 16:54:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:51.134 16:54:39 -- common/autotest_common.sh@10 -- # set +x 00:15:51.702 16:54:40 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:51.702 16:54:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:51.702 16:54:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.702 16:54:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:51.961 16:54:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:51.961 16:54:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:51.961 16:54:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:52.220 [2024-11-05 16:54:40.996911] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:52.220 [2024-11-05 16:54:40.996961] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.220 [2024-11-05 16:54:40.997050] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.220 [2024-11-05 16:54:41.070391] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.220 [2024-11-05 16:54:41.070421] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:15:52.220 16:54:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:52.220 16:54:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:52.220 16:54:41 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
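(A minimal sketch of what the verify_raid_bdev_state helper boils down to, using the same socket and jq filter shown in the trace; the expected-state comparison is an illustrative assumption:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # pull the current state of the named raid bdev
  state=$($rpc -s $sock bdev_raid_get_bdevs all \
          | jq -r '.[] | select(.name == "Existed_Raid") | .state')
  # after bdev_malloc_delete BaseBdev1, raid1 keeps the array online with one operational base bdev
  [ "$state" = online ] || echo "unexpected raid state: $state"
)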
00:15:52.220 16:54:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:52.478 16:54:41 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:52.478 16:54:41 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:52.478 16:54:41 -- bdev/bdev_raid.sh@287 -- # killprocess 114163 00:15:52.478 16:54:41 -- common/autotest_common.sh@936 -- # '[' -z 114163 ']' 00:15:52.478 16:54:41 -- common/autotest_common.sh@940 -- # kill -0 114163 00:15:52.478 16:54:41 -- common/autotest_common.sh@941 -- # uname 00:15:52.478 16:54:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:52.478 16:54:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114163 00:15:52.478 16:54:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:52.478 16:54:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:52.479 16:54:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 114163' 00:15:52.479 killing process with pid 114163 00:15:52.479 16:54:41 -- common/autotest_common.sh@955 -- # kill 114163 00:15:52.479 [2024-11-05 16:54:41.320868] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.479 [2024-11-05 16:54:41.320970] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:52.479 16:54:41 -- common/autotest_common.sh@960 -- # wait 114163 00:15:53.856 ************************************ 00:15:53.856 END TEST raid_state_function_test_sb 00:15:53.856 ************************************ 00:15:53.856 16:54:42 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:53.856 00:15:53.856 real 0m10.835s 00:15:53.856 user 0m18.900s 00:15:53.856 sys 0m1.247s 00:15:53.856 16:54:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:53.856 16:54:42 -- common/autotest_common.sh@10 -- # set +x 00:15:53.856 16:54:42 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:15:53.856 16:54:42 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:53.856 16:54:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:53.856 16:54:42 -- common/autotest_common.sh@10 -- # set +x 00:15:53.856 ************************************ 00:15:53.856 START TEST raid_superblock_test 00:15:53.856 ************************************ 00:15:53.856 16:54:42 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 2 00:15:53.856 16:54:42 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:15:53.856 16:54:42 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:53.856 16:54:42 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:53.856 16:54:42 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:53.856 16:54:42 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:53.856 16:54:42 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:53.856 16:54:42 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:53.856 16:54:42 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:53.856 16:54:42 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:53.856 16:54:42 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:53.856 16:54:42 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:53.856 16:54:42 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:53.856 16:54:42 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:53.856 16:54:42 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:15:53.856 16:54:42 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:15:53.856 16:54:42 -- bdev/bdev_raid.sh@357 -- # raid_pid=114494 00:15:53.856 16:54:42 
-- bdev/bdev_raid.sh@358 -- # waitforlisten 114494 /var/tmp/spdk-raid.sock 00:15:53.856 16:54:42 -- common/autotest_common.sh@829 -- # '[' -z 114494 ']' 00:15:53.856 16:54:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:53.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:53.856 16:54:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:53.856 16:54:42 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:53.856 16:54:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:53.856 16:54:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:53.856 16:54:42 -- common/autotest_common.sh@10 -- # set +x 00:15:53.856 [2024-11-05 16:54:42.436814] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:53.856 [2024-11-05 16:54:42.437111] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114494 ] 00:15:53.856 [2024-11-05 16:54:42.606121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.114 [2024-11-05 16:54:42.794320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.114 [2024-11-05 16:54:42.972180] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.680 16:54:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:54.680 16:54:43 -- common/autotest_common.sh@862 -- # return 0 00:15:54.680 16:54:43 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:54.680 16:54:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:54.680 16:54:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:54.680 16:54:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:54.680 16:54:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:54.680 16:54:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:54.680 16:54:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:54.680 16:54:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:54.680 16:54:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:54.938 malloc1 00:15:54.938 16:54:43 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:55.198 [2024-11-05 16:54:43.933634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:55.198 [2024-11-05 16:54:43.933769] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.198 [2024-11-05 16:54:43.933803] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:15:55.198 [2024-11-05 16:54:43.933851] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.198 [2024-11-05 16:54:43.936617] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.198 [2024-11-05 16:54:43.936686] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:55.198 pt1 00:15:55.198 
16:54:43 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:55.198 16:54:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:55.198 16:54:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:55.198 16:54:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:55.198 16:54:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:55.198 16:54:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:55.198 16:54:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:55.198 16:54:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:55.198 16:54:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:55.457 malloc2 00:15:55.457 16:54:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:55.715 [2024-11-05 16:54:44.456499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:55.715 [2024-11-05 16:54:44.456607] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.715 [2024-11-05 16:54:44.456649] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:55.715 [2024-11-05 16:54:44.456700] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.715 [2024-11-05 16:54:44.459086] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.715 [2024-11-05 16:54:44.459136] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:55.715 pt2 00:15:55.715 16:54:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:55.715 16:54:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:55.715 16:54:44 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:55.974 [2024-11-05 16:54:44.660600] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:55.974 [2024-11-05 16:54:44.662612] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.974 [2024-11-05 16:54:44.662819] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:15:55.974 [2024-11-05 16:54:44.662834] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:55.974 [2024-11-05 16:54:44.663003] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:55.974 [2024-11-05 16:54:44.663414] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:15:55.974 [2024-11-05 16:54:44.663429] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:15:55.974 [2024-11-05 16:54:44.663593] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.974 16:54:44 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:55.974 16:54:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:55.974 16:54:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:55.974 16:54:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:55.974 16:54:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:55.974 16:54:44 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=2 00:15:55.974 16:54:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:55.974 16:54:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:55.974 16:54:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:55.974 16:54:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:55.974 16:54:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.974 16:54:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.233 16:54:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:56.233 "name": "raid_bdev1", 00:15:56.233 "uuid": "ebb0e1be-609a-4d82-aaff-8bb2512ed3cd", 00:15:56.233 "strip_size_kb": 0, 00:15:56.233 "state": "online", 00:15:56.233 "raid_level": "raid1", 00:15:56.233 "superblock": true, 00:15:56.233 "num_base_bdevs": 2, 00:15:56.233 "num_base_bdevs_discovered": 2, 00:15:56.233 "num_base_bdevs_operational": 2, 00:15:56.233 "base_bdevs_list": [ 00:15:56.233 { 00:15:56.233 "name": "pt1", 00:15:56.233 "uuid": "a06e0991-266e-5738-9661-c3eef6e8c166", 00:15:56.233 "is_configured": true, 00:15:56.233 "data_offset": 2048, 00:15:56.233 "data_size": 63488 00:15:56.233 }, 00:15:56.233 { 00:15:56.233 "name": "pt2", 00:15:56.233 "uuid": "04cadf46-41b7-592a-a208-47ca16e9e39e", 00:15:56.233 "is_configured": true, 00:15:56.233 "data_offset": 2048, 00:15:56.233 "data_size": 63488 00:15:56.233 } 00:15:56.233 ] 00:15:56.233 }' 00:15:56.233 16:54:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:56.233 16:54:44 -- common/autotest_common.sh@10 -- # set +x 00:15:56.799 16:54:45 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:56.800 16:54:45 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:57.058 [2024-11-05 16:54:45.773074] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.058 16:54:45 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=ebb0e1be-609a-4d82-aaff-8bb2512ed3cd 00:15:57.058 16:54:45 -- bdev/bdev_raid.sh@380 -- # '[' -z ebb0e1be-609a-4d82-aaff-8bb2512ed3cd ']' 00:15:57.058 16:54:45 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:57.317 [2024-11-05 16:54:46.020890] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.317 [2024-11-05 16:54:46.020920] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.317 [2024-11-05 16:54:46.021010] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.317 [2024-11-05 16:54:46.021085] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.317 [2024-11-05 16:54:46.021097] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:15:57.317 16:54:46 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.317 16:54:46 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:57.576 16:54:46 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:57.576 16:54:46 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:57.576 16:54:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:57.576 16:54:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:15:57.835 16:54:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:15:57.835 16:54:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:15:58.094 16:54:46 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:15:58.094 16:54:46 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:15:58.094 16:54:46 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:15:58.094 16:54:46 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
00:15:58.094 16:54:46 -- common/autotest_common.sh@650 -- # local es=0
00:15:58.094 16:54:46 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
00:15:58.094 16:54:46 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:58.094 16:54:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:58.094 16:54:46 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:58.094 16:54:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:58.094 16:54:46 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:58.094 16:54:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:58.094 16:54:46 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:58.094 16:54:46 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:15:58.094 16:54:46 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
00:15:58.353 [2024-11-05 16:54:47.153102] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:15:58.353 [2024-11-05 16:54:47.155187] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:15:58.353 [2024-11-05 16:54:47.155334] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:15:58.353 [2024-11-05 16:54:47.155459] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:15:58.353 [2024-11-05 16:54:47.155521] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:58.353 [2024-11-05 16:54:47.155533] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring
00:15:58.353 request:
00:15:58.353 {
00:15:58.353 "name": "raid_bdev1",
00:15:58.353 "raid_level": "raid1",
00:15:58.353 "base_bdevs": [
00:15:58.353 "malloc1",
00:15:58.353 "malloc2"
00:15:58.353 ],
00:15:58.353 "superblock": false,
00:15:58.353 "method": "bdev_raid_create",
00:15:58.353 "req_id": 1
00:15:58.353 }
00:15:58.353 Got JSON-RPC error response
00:15:58.353 response:
00:15:58.353 {
00:15:58.353 "code": -17,
00:15:58.353 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:15:58.353 }
00:15:58.353 16:54:47 -- common/autotest_common.sh@653 -- # es=1
00:15:58.353 16:54:47 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:15:58.353 16:54:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:15:58.353 16:54:47 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:15:58.353 16:54:47 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:58.353 16:54:47 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:15:58.612 16:54:47 -- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:15:58.612 16:54:47 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:15:58.612 16:54:47 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:58.870 [2024-11-05 16:54:47.609166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:58.870 [2024-11-05 16:54:47.609305] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:58.870 [2024-11-05 16:54:47.609345] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:15:58.870 [2024-11-05 16:54:47.609403] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:58.871 [2024-11-05 16:54:47.611844] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:58.871 [2024-11-05 16:54:47.611919] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:58.871 [2024-11-05 16:54:47.612050] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:15:58.871 [2024-11-05 16:54:47.612137] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:58.871 pt1
00:15:58.871 16:54:47 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:15:58.871 16:54:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:58.871 16:54:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:58.871 16:54:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:15:58.871 16:54:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:15:58.871 16:54:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:15:58.871 16:54:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:58.871 16:54:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:58.871 16:54:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:58.871 16:54:47 -- bdev/bdev_raid.sh@125 -- # local tmp
00:15:58.871 16:54:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:58.871 16:54:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:59.129 16:54:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:59.129 "name": "raid_bdev1",
00:15:59.129 "uuid": "ebb0e1be-609a-4d82-aaff-8bb2512ed3cd",
00:15:59.129 "strip_size_kb": 0,
00:15:59.129 "state": "configuring",
00:15:59.129 "raid_level": "raid1",
00:15:59.129 "superblock": true,
00:15:59.129 "num_base_bdevs": 2,
00:15:59.129 "num_base_bdevs_discovered": 1,
00:15:59.129 "num_base_bdevs_operational": 2,
00:15:59.129 "base_bdevs_list": [
00:15:59.129 {
00:15:59.129 "name": "pt1",
00:15:59.129 "uuid": "a06e0991-266e-5738-9661-c3eef6e8c166",
00:15:59.129 "is_configured": true,
00:15:59.129 "data_offset": 2048,
00:15:59.129 "data_size": 63488
00:15:59.129 },
00:15:59.129 {
00:15:59.129 "name": null,
00:15:59.129 "uuid": "04cadf46-41b7-592a-a208-47ca16e9e39e",
00:15:59.129 "is_configured": false,
00:15:59.129 "data_offset": 2048,
00:15:59.129 "data_size": 63488
00:15:59.129 }
00:15:59.129 ]
00:15:59.129 }'
00:15:59.129 16:54:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:59.129 16:54:47 -- common/autotest_common.sh@10 -- # set +x
00:15:59.697 16:54:48 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']'
00:15:59.697 16:54:48 -- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:15:59.697 16:54:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:15:59.697 16:54:48 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:59.955 [2024-11-05 16:54:48.697446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:59.955 [2024-11-05 16:54:48.697581] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:59.955 [2024-11-05 16:54:48.697619] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:15:59.955 [2024-11-05 16:54:48.697646] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:59.955 [2024-11-05 16:54:48.698256] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:59.955 [2024-11-05 16:54:48.698306] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:59.955 [2024-11-05 16:54:48.698410] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:15:59.955 [2024-11-05 16:54:48.698447] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:59.955 [2024-11-05 16:54:48.698599] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80
00:15:59.955 [2024-11-05 16:54:48.698623] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:15:59.955 [2024-11-05 16:54:48.698750] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00
00:15:59.955 [2024-11-05 16:54:48.699172] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80
00:15:59.955 [2024-11-05 16:54:48.699198] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80
00:15:59.955 [2024-11-05 16:54:48.699399] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:59.955 pt2
00:15:59.955 16:54:48 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:15:59.955 16:54:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:15:59.955 16:54:48 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:15:59.955 16:54:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:59.955 16:54:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:59.955 16:54:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:15:59.955 16:54:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:15:59.955 16:54:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:15:59.955 16:54:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:59.955 16:54:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:59.955 16:54:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:59.955 16:54:48 -- bdev/bdev_raid.sh@125 -- # local tmp
00:15:59.955 16:54:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:59.955 16:54:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:00.214 16:54:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:00.214 "name": "raid_bdev1",
00:16:00.214 "uuid": "ebb0e1be-609a-4d82-aaff-8bb2512ed3cd",
00:16:00.214 "strip_size_kb": 0,
00:16:00.214 "state": "online",
00:16:00.214 "raid_level": "raid1",
00:16:00.214 "superblock": true,
00:16:00.214 "num_base_bdevs": 2,
00:16:00.214 "num_base_bdevs_discovered": 2,
00:16:00.214 "num_base_bdevs_operational": 2,
00:16:00.214 "base_bdevs_list": [
00:16:00.214 {
00:16:00.214 "name": "pt1",
00:16:00.214 "uuid": "a06e0991-266e-5738-9661-c3eef6e8c166",
00:16:00.214 "is_configured": true,
00:16:00.214 "data_offset": 2048,
00:16:00.214 "data_size": 63488
00:16:00.214 },
00:16:00.214 {
00:16:00.214 "name": "pt2",
00:16:00.214 "uuid": "04cadf46-41b7-592a-a208-47ca16e9e39e",
00:16:00.214 "is_configured": true,
00:16:00.214 "data_offset": 2048,
00:16:00.214 "data_size": 63488
00:16:00.214 }
00:16:00.214 ]
00:16:00.214 }'
00:16:00.214 16:54:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:00.214 16:54:48 -- common/autotest_common.sh@10 -- # set +x
00:16:00.785 16:54:49 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:16:00.785 16:54:49 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:16:01.043 [2024-11-05 16:54:49.829855] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:01.043 16:54:49 -- bdev/bdev_raid.sh@430 -- # '[' ebb0e1be-609a-4d82-aaff-8bb2512ed3cd '!=' ebb0e1be-609a-4d82-aaff-8bb2512ed3cd ']'
00:16:01.043 16:54:49 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1
00:16:01.043 16:54:49 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:16:01.043 16:54:49 -- bdev/bdev_raid.sh@196 -- # return 0
00:16:01.043 16:54:49 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:16:01.301 [2024-11-05 16:54:50.081784] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:16:01.301 16:54:50 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:16:01.301 16:54:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:01.301 16:54:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:01.301 16:54:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:01.301 16:54:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:01.301 16:54:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:16:01.301 16:54:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:01.301 16:54:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:01.301 16:54:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:01.301 16:54:50 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:01.301 16:54:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:01.301 16:54:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:01.558 16:54:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:01.558 "name": "raid_bdev1",
00:16:01.558 "uuid": "ebb0e1be-609a-4d82-aaff-8bb2512ed3cd",
00:16:01.558 "strip_size_kb": 0,
00:16:01.558 "state": "online",
00:16:01.558 "raid_level": "raid1",
00:16:01.558 "superblock": true,
00:16:01.558 "num_base_bdevs": 2,
00:16:01.558 "num_base_bdevs_discovered": 1,
00:16:01.558 "num_base_bdevs_operational": 1,
00:16:01.558 "base_bdevs_list": [
00:16:01.558 {
00:16:01.558 "name": null,
00:16:01.558 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:01.558 "is_configured": false,
00:16:01.558 "data_offset": 2048,
00:16:01.558 "data_size": 63488
00:16:01.558 },
00:16:01.558 {
00:16:01.558 "name": "pt2",
00:16:01.558 "uuid": "04cadf46-41b7-592a-a208-47ca16e9e39e",
00:16:01.558 "is_configured": true,
00:16:01.558 "data_offset": 2048,
00:16:01.558 "data_size": 63488
00:16:01.558 }
00:16:01.558 ]
00:16:01.558 }'
00:16:01.558 16:54:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:01.558 16:54:50 -- common/autotest_common.sh@10 -- # set +x
00:16:02.124 16:54:50 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:16:02.382 [2024-11-05 16:54:51.218007] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:02.382 [2024-11-05 16:54:51.218041] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:02.382 [2024-11-05 16:54:51.218127] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:02.382 [2024-11-05 16:54:51.218179] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:02.382 [2024-11-05 16:54:51.218190] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline
00:16:02.382 16:54:51 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:02.382 16:54:51 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]'
00:16:02.640 16:54:51 -- bdev/bdev_raid.sh@443 -- # raid_bdev=
00:16:02.640 16:54:51 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']'
00:16:02.640 16:54:51 -- bdev/bdev_raid.sh@449 -- # (( i = 1 ))
00:16:02.640 16:54:51 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:16:02.640 16:54:51 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:16:02.899 16:54:51 -- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:16:02.899 16:54:51 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:16:02.899 16:54:51 -- bdev/bdev_raid.sh@454 -- # (( i = 1 ))
00:16:02.899 16:54:51 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:16:02.899 16:54:51 -- bdev/bdev_raid.sh@462 -- # i=1
00:16:02.899 16:54:51 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:03.157 [2024-11-05 16:54:51.910172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:03.157 [2024-11-05 16:54:51.910281] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:03.157 [2024-11-05 16:54:51.910315] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:16:03.157 [2024-11-05 16:54:51.910348] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:03.157 [2024-11-05 16:54:51.912867] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:03.157 [2024-11-05 16:54:51.912937] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:03.157 [2024-11-05 16:54:51.913070] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:16:03.157 [2024-11-05 16:54:51.913127] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:03.157 [2024-11-05 16:54:51.913268] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980
00:16:03.157 [2024-11-05 16:54:51.913281] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:16:03.157 [2024-11-05 16:54:51.913373] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:16:03.157 [2024-11-05 16:54:51.913777] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980
00:16:03.157 [2024-11-05 16:54:51.913802] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980
00:16:03.157 [2024-11-05 16:54:51.913940] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:03.157 pt2
00:16:03.157 16:54:51 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:16:03.157 16:54:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:03.157 16:54:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:03.157 16:54:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:03.157 16:54:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:03.157 16:54:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:16:03.157 16:54:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:03.157 16:54:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:03.157 16:54:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:03.157 16:54:51 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:03.157 16:54:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:03.158 16:54:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:03.416 16:54:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:03.416 "name": "raid_bdev1",
00:16:03.416 "uuid": "ebb0e1be-609a-4d82-aaff-8bb2512ed3cd",
00:16:03.416 "strip_size_kb": 0,
00:16:03.416 "state": "online",
00:16:03.416 "raid_level": "raid1",
00:16:03.416 "superblock": true,
00:16:03.416 "num_base_bdevs": 2,
00:16:03.416 "num_base_bdevs_discovered": 1,
00:16:03.416 "num_base_bdevs_operational": 1,
00:16:03.416 "base_bdevs_list": [
00:16:03.416 {
00:16:03.416 "name": null,
00:16:03.416 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:03.416 "is_configured": false,
00:16:03.416 "data_offset": 2048,
00:16:03.416 "data_size": 63488
00:16:03.416 },
00:16:03.416 {
00:16:03.416 "name": "pt2",
00:16:03.416 "uuid": "04cadf46-41b7-592a-a208-47ca16e9e39e",
00:16:03.416 "is_configured": true,
00:16:03.416 "data_offset": 2048,
00:16:03.416 "data_size": 63488
00:16:03.416 }
00:16:03.416 ]
00:16:03.416 }'
00:16:03.416 16:54:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:03.416 16:54:52 -- common/autotest_common.sh@10 -- # set +x
00:16:03.985 16:54:52 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']'
00:16:03.985 16:54:52 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:16:03.985 16:54:52 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid'
00:16:04.243 [2024-11-05 16:54:52.926630] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:04.243 16:54:52 -- bdev/bdev_raid.sh@506 -- # '[' ebb0e1be-609a-4d82-aaff-8bb2512ed3cd '!=' ebb0e1be-609a-4d82-aaff-8bb2512ed3cd ']'
00:16:04.243 16:54:52 -- bdev/bdev_raid.sh@511 -- # killprocess 114494
00:16:04.243 16:54:52 -- common/autotest_common.sh@936 -- # '[' -z 114494 ']'
00:16:04.243 16:54:52 -- common/autotest_common.sh@940 -- # kill -0 114494
00:16:04.243 16:54:52 -- common/autotest_common.sh@941 -- # uname
00:16:04.243 16:54:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:04.243 16:54:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114494
00:16:04.243 killing process with pid 114494
00:16:04.243 16:54:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:04.243 16:54:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:04.243 16:54:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 114494'
00:16:04.243 16:54:52 -- common/autotest_common.sh@955 -- # kill 114494
00:16:04.243 [2024-11-05 16:54:52.966745] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:04.243 16:54:52 -- common/autotest_common.sh@960 -- # wait 114494
00:16:04.243 [2024-11-05 16:54:52.966809] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:04.243 [2024-11-05 16:54:52.966857] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:04.243 [2024-11-05 16:54:52.966867] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline
00:16:04.243 [2024-11-05 16:54:53.111363] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:05.619 16:54:54 -- bdev/bdev_raid.sh@513 -- # return 0
00:16:05.619
00:16:05.619 real 0m11.721s
00:16:05.619 user 0m20.978s
00:16:05.619 sys 0m1.251s
00:16:05.619 ************************************
00:16:05.619 END TEST raid_superblock_test
00:16:05.619 ************************************
00:16:05.619 16:54:54 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:05.619 16:54:54 -- common/autotest_common.sh@10 -- # set +x
00:16:05.619 16:54:54 -- bdev/bdev_raid.sh@725 -- # for n in {2..4}
00:16:05.619 16:54:54 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:16:05.619 16:54:54 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false
00:16:05.619 16:54:54 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:16:05.619 16:54:54 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:05.619 16:54:54 -- common/autotest_common.sh@10 -- # set +x
00:16:05.619 ************************************
00:16:05.619 START TEST raid_state_function_test
00:16:05.619 ************************************
00:16:05.619 16:54:54 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 3 false
00:16:05.619 16:54:54 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:16:05.619 16:54:54 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:16:05.619 16:54:54 -- bdev/bdev_raid.sh@204 -- # local superblock=false
00:16:05.619 16:54:54 -- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:16:05.619 16:54:54 -- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:16:05.619 16:54:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:05.619 16:54:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:16:05.619 16:54:54 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:05.619 16:54:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:05.619 16:54:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:16:05.620 16:54:54 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:05.620 16:54:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:05.620 16:54:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:16:05.620 16:54:54 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:05.620 16:54:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:05.620 16:54:54 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:16:05.620 16:54:54 -- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:16:05.620 16:54:54 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:16:05.620 16:54:54 -- bdev/bdev_raid.sh@208 -- # local strip_size
00:16:05.620 16:54:54 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:16:05.620 16:54:54 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:16:05.620 16:54:54 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:16:05.620 16:54:54 -- bdev/bdev_raid.sh@213 -- # strip_size=64
00:16:05.620 16:54:54 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:16:05.620 16:54:54 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:16:05.620 16:54:54 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:16:05.620 16:54:54 -- bdev/bdev_raid.sh@226 -- # raid_pid=114851
00:16:05.620 16:54:54 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 114851'
00:16:05.620 16:54:54 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:16:05.620 Process raid pid: 114851
00:16:05.620 16:54:54 -- bdev/bdev_raid.sh@228 -- # waitforlisten 114851 /var/tmp/spdk-raid.sock
00:16:05.620 16:54:54 -- common/autotest_common.sh@829 -- # '[' -z 114851 ']'
00:16:05.620 16:54:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:16:05.620 16:54:54 -- common/autotest_common.sh@834 -- # local max_retries=100
00:16:05.620 16:54:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:16:05.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:16:05.620 16:54:54 -- common/autotest_common.sh@838 -- # xtrace_disable
00:16:05.620 16:54:54 -- common/autotest_common.sh@10 -- # set +x
00:16:05.620 [2024-11-05 16:54:54.230496] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:05.620 [2024-11-05 16:54:54.230989] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:05.620 [2024-11-05 16:54:54.395363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:05.889 [2024-11-05 16:54:54.563479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:05.889 [2024-11-05 16:54:54.746659] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:06.473 16:54:55 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:06.473 16:54:55 -- common/autotest_common.sh@862 -- # return 0
00:16:06.473 16:54:55 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:16:06.473 [2024-11-05 16:54:55.335131] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:06.473 [2024-11-05 16:54:55.335439] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:06.473 [2024-11-05 16:54:55.335565] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:06.473 [2024-11-05 16:54:55.335696] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:06.473 [2024-11-05 16:54:55.335799] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:06.473 [2024-11-05 16:54:55.335884] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:06.473 16:54:55 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:16:06.473 16:54:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:06.473 16:54:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:06.473 16:54:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:06.473 16:54:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:06.473 16:54:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:06.473 16:54:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:06.473 16:54:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:06.473 16:54:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:06.473 16:54:55 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:06.473 16:54:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:06.473 16:54:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:06.731 16:54:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:06.731 "name": "Existed_Raid",
00:16:06.731 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:06.731 "strip_size_kb": 64,
00:16:06.731 "state": "configuring",
00:16:06.731 "raid_level": "raid0",
00:16:06.731 "superblock": false,
00:16:06.731 "num_base_bdevs": 3,
00:16:06.731 "num_base_bdevs_discovered": 0,
00:16:06.731 "num_base_bdevs_operational": 3,
00:16:06.731 "base_bdevs_list": [
00:16:06.731 {
00:16:06.731 "name": "BaseBdev1",
00:16:06.731 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:06.731 "is_configured": false,
00:16:06.731 "data_offset": 0,
00:16:06.731 "data_size": 0
00:16:06.731 },
00:16:06.731 {
00:16:06.731 "name": "BaseBdev2",
00:16:06.731 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:06.731 "is_configured": false,
00:16:06.731 "data_offset": 0,
00:16:06.731 "data_size": 0
00:16:06.731 },
00:16:06.731 {
00:16:06.731 "name": "BaseBdev3",
00:16:06.731 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:06.731 "is_configured": false,
00:16:06.731 "data_offset": 0,
00:16:06.731 "data_size": 0
00:16:06.731 }
00:16:06.731 ]
00:16:06.731 }'
00:16:06.731 16:54:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:06.731 16:54:55 -- common/autotest_common.sh@10 -- # set +x
00:16:07.667 16:54:56 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:16:07.667 [2024-11-05 16:54:56.411502] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:07.667 [2024-11-05 16:54:56.411689] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:16:07.667 16:54:56 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:16:07.925 [2024-11-05 16:54:56.659631] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:07.925 [2024-11-05 16:54:56.659856] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:07.925 [2024-11-05 16:54:56.659977] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:07.925 [2024-11-05 16:54:56.660111] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:07.925 [2024-11-05 16:54:56.660305] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:07.925 [2024-11-05 16:54:56.660391] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:07.925 16:54:56 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:16:08.183 [2024-11-05 16:54:56.943573] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:08.183 BaseBdev1
00:16:08.183 16:54:56 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:16:08.183 16:54:56 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:16:08.183 16:54:56 -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:08.183 16:54:56 -- common/autotest_common.sh@899 -- # local i
00:16:08.183 16:54:56 -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:08.183 16:54:56 -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:08.183 16:54:56 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:08.441 16:54:57 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:08.699 [
00:16:08.699 {
00:16:08.699 "name": "BaseBdev1",
00:16:08.699 "aliases": [
00:16:08.699 "cd0f57d1-ddcc-4ae3-af71-f8ca31076f1f"
00:16:08.699 ],
00:16:08.699 "product_name": "Malloc disk",
00:16:08.699 "block_size": 512,
00:16:08.700 "num_blocks": 65536,
00:16:08.700 "uuid": "cd0f57d1-ddcc-4ae3-af71-f8ca31076f1f",
00:16:08.700 "assigned_rate_limits": {
00:16:08.700 "rw_ios_per_sec": 0,
00:16:08.700 "rw_mbytes_per_sec": 0,
00:16:08.700 "r_mbytes_per_sec": 0,
00:16:08.700 "w_mbytes_per_sec": 0
00:16:08.700 },
00:16:08.700 "claimed": true,
00:16:08.700 "claim_type": "exclusive_write",
00:16:08.700 "zoned": false,
00:16:08.700 "supported_io_types": {
00:16:08.700 "read": true,
00:16:08.700 "write": true,
00:16:08.700 "unmap": true,
00:16:08.700 "write_zeroes": true,
00:16:08.700 "flush": true,
00:16:08.700 "reset": true,
00:16:08.700 "compare": false,
00:16:08.700 "compare_and_write": false,
00:16:08.700 "abort": true,
00:16:08.700 "nvme_admin": false,
00:16:08.700 "nvme_io": false
00:16:08.700 },
00:16:08.700 "memory_domains": [
00:16:08.700 {
00:16:08.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:08.700 "dma_device_type": 2
00:16:08.700 }
00:16:08.700 ],
00:16:08.700 "driver_specific": {}
00:16:08.700 }
00:16:08.700 ]
00:16:08.700 16:54:57 -- common/autotest_common.sh@905 -- # return 0
00:16:08.700 16:54:57 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:16:08.700 16:54:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:08.700 16:54:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:08.700 16:54:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:08.700 16:54:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:08.700 16:54:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:08.700 16:54:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:08.700 16:54:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:08.700 16:54:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:08.700 16:54:57 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:08.700 16:54:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:08.700 16:54:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:08.959 16:54:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:08.959 "name": "Existed_Raid",
00:16:08.959 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:08.959 "strip_size_kb": 64,
00:16:08.959 "state": "configuring",
00:16:08.959 "raid_level": "raid0",
00:16:08.959 "superblock": false,
00:16:08.959 "num_base_bdevs": 3,
00:16:08.959 "num_base_bdevs_discovered": 1,
00:16:08.959 "num_base_bdevs_operational": 3,
00:16:08.959 "base_bdevs_list": [
00:16:08.959 {
00:16:08.959 "name": "BaseBdev1",
00:16:08.959 "uuid": "cd0f57d1-ddcc-4ae3-af71-f8ca31076f1f",
00:16:08.959 "is_configured": true,
00:16:08.959 "data_offset": 0,
00:16:08.959 "data_size": 65536
00:16:08.959 },
00:16:08.959 {
00:16:08.959 "name": "BaseBdev2",
00:16:08.959 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:08.959 "is_configured": false,
00:16:08.959 "data_offset": 0,
00:16:08.959 "data_size": 0
00:16:08.959 },
00:16:08.959 {
00:16:08.959 "name": "BaseBdev3",
00:16:08.959 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:08.959 "is_configured": false,
00:16:08.959 "data_offset": 0,
00:16:08.959 "data_size": 0
00:16:08.959 }
00:16:08.959 ]
00:16:08.959 }'
00:16:08.959 16:54:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:08.959 16:54:57 -- common/autotest_common.sh@10 -- # set +x
00:16:09.527 16:54:58 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:16:09.786 [2024-11-05 16:54:58.439921] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:09.786 [2024-11-05 16:54:58.440148] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:16:09.786 16:54:58 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:16:09.786 16:54:58 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:16:09.786 [2024-11-05 16:54:58.628014] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:09.786 [2024-11-05 16:54:58.630241] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:09.786 [2024-11-05 16:54:58.630453] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:09.786 [2024-11-05 16:54:58.630567] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:09.786 [2024-11-05 16:54:58.630684] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:09.786 16:54:58 -- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:16:09.786 16:54:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:09.786 16:54:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:16:09.786 16:54:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:09.786 16:54:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:09.786 16:54:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:09.786 16:54:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:09.786 16:54:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:09.786 16:54:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:09.786 16:54:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:09.786 16:54:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:09.786 16:54:58 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:09.786 16:54:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:09.786 16:54:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:10.045 16:54:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:10.045 "name": "Existed_Raid",
00:16:10.045 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:10.045 "strip_size_kb": 64,
00:16:10.045 "state": "configuring",
00:16:10.045 "raid_level": "raid0",
00:16:10.045 "superblock": false,
00:16:10.045 "num_base_bdevs": 3,
00:16:10.045 "num_base_bdevs_discovered": 1,
00:16:10.045 "num_base_bdevs_operational": 3,
00:16:10.045 "base_bdevs_list": [
00:16:10.045 {
00:16:10.045 "name": "BaseBdev1",
00:16:10.045 "uuid": "cd0f57d1-ddcc-4ae3-af71-f8ca31076f1f",
00:16:10.045 "is_configured": true,
00:16:10.045 "data_offset": 0,
00:16:10.045 "data_size": 65536
00:16:10.045 },
00:16:10.045 {
00:16:10.045 "name": "BaseBdev2",
00:16:10.045 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:10.045 "is_configured": false,
00:16:10.045 "data_offset": 0,
00:16:10.045 "data_size": 0
00:16:10.045 },
00:16:10.045 {
00:16:10.045 "name": "BaseBdev3",
00:16:10.045 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:10.045 "is_configured": false,
00:16:10.045 "data_offset": 0,
00:16:10.045 "data_size": 0
00:16:10.045 }
00:16:10.045 ]
00:16:10.045 }'
00:16:10.045 16:54:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:10.045 16:54:58 -- common/autotest_common.sh@10 -- # set +x
00:16:10.981 16:54:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:16:10.981 [2024-11-05 16:54:59.786461] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:10.981 BaseBdev2
00:16:10.981 16:54:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:16:10.981 16:54:59 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:16:10.981 16:54:59 -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:10.981 16:54:59 -- common/autotest_common.sh@899 -- # local i
00:16:10.981 16:54:59 -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:10.981 16:54:59 -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:10.981 16:54:59 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:11.244 16:55:00 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:11.501 [
00:16:11.501 {
00:16:11.501 "name": "BaseBdev2",
00:16:11.501 "aliases": [
00:16:11.501 "2196ed10-b500-401f-a1c4-2836e469052a"
00:16:11.501 ],
00:16:11.501 "product_name": "Malloc disk",
00:16:11.501 "block_size": 512,
00:16:11.501 "num_blocks": 65536,
00:16:11.501 "uuid": "2196ed10-b500-401f-a1c4-2836e469052a",
00:16:11.501 "assigned_rate_limits": {
00:16:11.501 "rw_ios_per_sec": 0,
00:16:11.501 "rw_mbytes_per_sec": 0,
00:16:11.501 "r_mbytes_per_sec": 0,
00:16:11.501 "w_mbytes_per_sec": 0
00:16:11.501 },
00:16:11.501 "claimed": true,
00:16:11.501 "claim_type": "exclusive_write",
00:16:11.501 "zoned": false,
00:16:11.501 "supported_io_types": {
00:16:11.502 "read": true,
00:16:11.502 "write": true,
00:16:11.502 "unmap": true,
00:16:11.502 "write_zeroes": true,
00:16:11.502 "flush": true,
00:16:11.502 "reset": true,
00:16:11.502 "compare": false,
00:16:11.502 "compare_and_write": false,
00:16:11.502 "abort": true,
00:16:11.502 "nvme_admin": false,
00:16:11.502 "nvme_io": false
00:16:11.502 },
00:16:11.502 "memory_domains": [
00:16:11.502 {
00:16:11.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:11.502 "dma_device_type": 2
00:16:11.502 }
00:16:11.502 ],
00:16:11.502 "driver_specific": {}
00:16:11.502 }
00:16:11.502 ]
00:16:11.502 16:55:00 -- common/autotest_common.sh@905 -- # return 0
00:16:11.502 16:55:00 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:16:11.502 16:55:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:11.502 16:55:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:16:11.502 16:55:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:11.502 16:55:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:11.502 16:55:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:11.502 16:55:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:11.502 16:55:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:11.502 16:55:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:11.502 16:55:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:11.502 16:55:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:11.502 16:55:00 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:11.502 16:55:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:11.502 16:55:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:11.760 16:55:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:11.760 "name": "Existed_Raid",
00:16:11.760 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:11.760 "strip_size_kb": 64,
00:16:11.760 "state": "configuring",
00:16:11.760 "raid_level": "raid0",
00:16:11.760 "superblock": false,
00:16:11.760 "num_base_bdevs": 3,
00:16:11.760 "num_base_bdevs_discovered": 2,
00:16:11.760 "num_base_bdevs_operational": 3,
00:16:11.760 "base_bdevs_list": [
00:16:11.760 {
00:16:11.760 "name": "BaseBdev1",
00:16:11.760 "uuid": "cd0f57d1-ddcc-4ae3-af71-f8ca31076f1f",
00:16:11.760 "is_configured": true,
00:16:11.760 "data_offset": 0,
00:16:11.760 "data_size": 65536
00:16:11.760 },
00:16:11.760 {
00:16:11.760 "name": "BaseBdev2",
00:16:11.760 "uuid": "2196ed10-b500-401f-a1c4-2836e469052a",
00:16:11.760 "is_configured": true,
00:16:11.760 "data_offset": 0,
00:16:11.760 "data_size": 65536
00:16:11.760 },
00:16:11.760 {
00:16:11.760 "name": "BaseBdev3",
00:16:11.760 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:11.760 "is_configured": false,
00:16:11.760 "data_offset": 0,
00:16:11.760 "data_size": 0
00:16:11.760 }
00:16:11.760 ]
00:16:11.760 }'
00:16:11.760 16:55:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:11.760 16:55:00 -- common/autotest_common.sh@10 -- # set +x
00:16:12.327 16:55:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:16:12.585 [2024-11-05 16:55:01.371739] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:12.585 [2024-11-05 16:55:01.372064] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80
00:16:12.585 [2024-11-05 16:55:01.372125] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:16:12.585 [2024-11-05 16:55:01.372413] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0
00:16:12.585 [2024-11-05 16:55:01.372957] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80
00:16:12.585 [2024-11-05 16:55:01.373115] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80
00:16:12.585 [2024-11-05 16:55:01.373515] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:12.585 BaseBdev3
00:16:12.585 16:55:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:16:12.585 16:55:01 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:16:12.585 16:55:01 -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:12.585 16:55:01 -- common/autotest_common.sh@899 -- # local i
00:16:12.585 16:55:01 -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:12.585 16:55:01 -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:12.585 16:55:01 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:12.844 16:55:01 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:16:13.102 [
00:16:13.102 {
00:16:13.102 "name": "BaseBdev3",
00:16:13.102 "aliases": [
00:16:13.102 "5b7d0052-1639-453c-846a-00a1f6832d94"
00:16:13.102 ],
00:16:13.102 "product_name": "Malloc disk",
00:16:13.102 "block_size": 512,
00:16:13.102 "num_blocks": 65536,
00:16:13.102 "uuid": "5b7d0052-1639-453c-846a-00a1f6832d94",
00:16:13.102 "assigned_rate_limits": {
00:16:13.102 "rw_ios_per_sec": 0,
00:16:13.102 "rw_mbytes_per_sec": 0,
00:16:13.102 "r_mbytes_per_sec": 0,
00:16:13.102 "w_mbytes_per_sec": 0
00:16:13.102 },
00:16:13.102 "claimed": true,
00:16:13.102 "claim_type": "exclusive_write",
00:16:13.102 "zoned": false,
00:16:13.102 "supported_io_types": {
00:16:13.102 "read": true,
00:16:13.102 "write": true,
00:16:13.102 "unmap": true,
00:16:13.102 "write_zeroes": true,
00:16:13.102 "flush": true,
00:16:13.102 "reset": true,
00:16:13.102 "compare": false,
00:16:13.102 "compare_and_write": false,
00:16:13.102 "abort": true,
00:16:13.102 "nvme_admin": false,
00:16:13.102 "nvme_io": false
00:16:13.102 },
00:16:13.102 "memory_domains": [
00:16:13.102 {
00:16:13.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:13.102 "dma_device_type": 2
00:16:13.102 }
00:16:13.102 ],
00:16:13.102 "driver_specific": {}
00:16:13.102 }
00:16:13.102 ]
00:16:13.102 16:55:01 -- common/autotest_common.sh@905 -- # return 0
00:16:13.102 16:55:01 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:16:13.102 16:55:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:13.102 16:55:01 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:16:13.102 16:55:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:13.102 16:55:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:13.102 16:55:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:13.102 16:55:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:13.102 16:55:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:13.102 16:55:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:13.102 16:55:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:13.102 16:55:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:13.102 16:55:01 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:13.102 16:55:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:13.102 16:55:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:13.668 16:55:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:13.668 "name": "Existed_Raid",
00:16:13.668 "uuid": "a46b01ae-a3a2-4fe7-8fd4-1eff3158e1d6",
00:16:13.668 "strip_size_kb": 64,
00:16:13.668 "state": "online",
00:16:13.668 "raid_level": "raid0",
00:16:13.668 "superblock": false,
00:16:13.668 "num_base_bdevs": 3,
00:16:13.668 "num_base_bdevs_discovered": 3,
00:16:13.668 "num_base_bdevs_operational": 3,
00:16:13.668 "base_bdevs_list": [
00:16:13.668 {
00:16:13.668 "name": "BaseBdev1",
00:16:13.668 "uuid": "cd0f57d1-ddcc-4ae3-af71-f8ca31076f1f",
00:16:13.668 "is_configured": true,
00:16:13.668 "data_offset": 0,
00:16:13.668 "data_size": 65536
00:16:13.668 },
00:16:13.668 {
00:16:13.668 "name": "BaseBdev2",
00:16:13.668 "uuid": "2196ed10-b500-401f-a1c4-2836e469052a",
00:16:13.668 "is_configured": true,
00:16:13.668 "data_offset": 0,
00:16:13.668 "data_size": 65536
00:16:13.668 },
00:16:13.668 {
00:16:13.668 "name": "BaseBdev3",
00:16:13.668 "uuid": "5b7d0052-1639-453c-846a-00a1f6832d94",
00:16:13.668 "is_configured": true,
00:16:13.668 "data_offset": 0,
00:16:13.668 "data_size": 65536
00:16:13.668 }
00:16:13.668 ]
00:16:13.668 }'
00:16:13.668 16:55:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:13.668 16:55:02 -- common/autotest_common.sh@10 -- # set +x
00:16:14.233 16:55:02 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:16:14.491 [2024-11-05 16:55:03.252573] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:14.491 [2024-11-05 16:55:03.252770] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:14.491 [2024-11-05 16:55:03.252951] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:14.491 16:55:03 -- bdev/bdev_raid.sh@263 -- # local expected_state
00:16:14.491 16:55:03 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0
00:16:14.491 16:55:03 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:16:14.491 16:55:03 -- bdev/bdev_raid.sh@197 -- # return 1
00:16:14.491 16:55:03 -- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:16:14.491 16:55:03 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:16:14.491 16:55:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:14.491 16:55:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:16:14.491 16:55:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:14.491 16:55:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:14.491 16:55:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:16:14.491 16:55:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:14.491 16:55:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:14.491 16:55:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:14.491 16:55:03 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:14.491 16:55:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:14.491 16:55:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:14.749 16:55:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:14.749 "name": "Existed_Raid",
00:16:14.749 "uuid": "a46b01ae-a3a2-4fe7-8fd4-1eff3158e1d6",
00:16:14.749 "strip_size_kb": 64,
00:16:14.749 "state": "offline",
00:16:14.749 "raid_level": "raid0",
00:16:14.749 "superblock": false,
00:16:14.749 "num_base_bdevs": 3,
00:16:14.749 "num_base_bdevs_discovered": 2,
00:16:14.749 "num_base_bdevs_operational": 2,
00:16:14.749 "base_bdevs_list": [
00:16:14.749 {
00:16:14.749 "name": null,
00:16:14.749 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:14.749 "is_configured": false,
00:16:14.749 "data_offset": 0,
00:16:14.749 "data_size": 65536
00:16:14.749 },
00:16:14.749 {
00:16:14.749 "name": "BaseBdev2",
00:16:14.749 "uuid": "2196ed10-b500-401f-a1c4-2836e469052a",
00:16:14.749 "is_configured": true,
00:16:14.749 "data_offset": 0,
00:16:14.749 "data_size": 65536
00:16:14.749 },
00:16:14.749 {
00:16:14.749 "name": "BaseBdev3",
00:16:14.749 "uuid": "5b7d0052-1639-453c-846a-00a1f6832d94",
00:16:14.749 "is_configured": true,
00:16:14.749 "data_offset": 0,
00:16:14.749 "data_size": 65536
00:16:14.749 }
00:16:14.749 ]
00:16:14.749 }'
00:16:14.749 16:55:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:14.749 16:55:03 -- common/autotest_common.sh@10 -- # set +x
00:16:15.683 16:55:04 -- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:16:15.683 16:55:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:15.683 16:55:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:15.683 16:55:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:16:15.942 16:55:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:16:15.942 16:55:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:15.942 16:55:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:16:16.206 [2024-11-05 16:55:04.907793] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:16:16.206 16:55:04 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:16:16.206 16:55:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:16.206 16:55:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:16.206 16:55:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:16:16.565 16:55:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:16:16.565 16:55:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:16.566 16:55:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:16:16.825 [2024-11-05 16:55:05.504939] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:16:16.825 [2024-11-05 16:55:05.505166] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline
00:16:16.825 16:55:05 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:16:16.825 16:55:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:16.825 16:55:05 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:16.825 16:55:05 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:16:17.084 16:55:05 -- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:16:17.084 16:55:05 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:16:17.084 16:55:05 -- bdev/bdev_raid.sh@287 -- # killprocess 114851
00:16:17.084 16:55:05 -- common/autotest_common.sh@936 -- # '[' -z 114851 ']'
00:16:17.084 16:55:05 -- common/autotest_common.sh@940 -- # kill -0 114851
00:16:17.084 16:55:05 -- common/autotest_common.sh@941 -- # uname
00:16:17.084 16:55:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:17.084 16:55:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114851
00:16:17.084 killing process with pid 114851
00:16:17.084 16:55:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:17.084 16:55:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:17.084 16:55:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 114851'
00:16:17.084 16:55:05 -- common/autotest_common.sh@955 -- # kill 114851
00:16:17.084 16:55:05 -- common/autotest_common.sh@960 -- # wait 114851
00:16:17.084 [2024-11-05 16:55:05.879360] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:17.084 [2024-11-05 16:55:05.879500] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@289 -- # return 0
00:16:18.019
00:16:18.019 real 0m12.687s
00:16:18.019 user 0m22.574s
00:16:18.019 sys 0m1.428s
00:16:18.019 ************************************
00:16:18.019 END TEST raid_state_function_test
00:16:18.019 ************************************
00:16:18.019 16:55:06 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:18.019 16:55:06 -- common/autotest_common.sh@10 -- # set +x
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true
00:16:18.019 16:55:06 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:16:18.019 16:55:06 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:18.019 16:55:06 -- common/autotest_common.sh@10 -- # set +x
00:16:18.019 ************************************
00:16:18.019 START TEST raid_state_function_test_sb
00:16:18.019 ************************************
00:16:18.019 16:55:06 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 3 true
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@204 -- # local superblock=true
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@208 -- # local strip_size
00:16:18.019 16:55:06 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:16:18.020 16:55:06 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:16:18.020 16:55:06 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:16:18.020 16:55:06 -- bdev/bdev_raid.sh@213 -- # strip_size=64
00:16:18.020 16:55:06 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:16:18.020 16:55:06 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:16:18.020 16:55:06 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:16:18.020 16:55:06 -- bdev/bdev_raid.sh@226 -- # raid_pid=115241
00:16:18.020 16:55:06 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:16:18.020 Process raid pid: 115241
00:16:18.020 16:55:06 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 115241'
00:16:18.020 16:55:06 -- bdev/bdev_raid.sh@228 -- # waitforlisten 115241 /var/tmp/spdk-raid.sock
00:16:18.020 16:55:06 -- common/autotest_common.sh@829 -- # '[' -z 115241 ']'
00:16:18.020 16:55:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:16:18.020 16:55:06 -- common/autotest_common.sh@834 -- # local max_retries=100
00:16:18.020 16:55:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:16:18.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:16:18.020 16:55:06 -- common/autotest_common.sh@838 -- # xtrace_disable
00:16:18.020 16:55:06 -- common/autotest_common.sh@10 -- # set +x
00:16:18.278 [2024-11-05 16:55:06.957745] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:18.278 [2024-11-05 16:55:06.958164] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:18.278 [2024-11-05 16:55:07.113044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:18.535 [2024-11-05 16:55:07.303736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:18.793 [2024-11-05 16:55:07.487888] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:19.051 16:55:07 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:19.051 16:55:07 -- common/autotest_common.sh@862 -- # return 0
00:16:19.051 16:55:07 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:16:19.309 [2024-11-05 16:55:08.160010] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:19.309 [2024-11-05 16:55:08.160443] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:19.309 [2024-11-05 16:55:08.160559] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:19.309 [2024-11-05 16:55:08.160635] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:19.309 [2024-11-05 16:55:08.160765] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:19.309 [2024-11-05 16:55:08.160872] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:19.309 16:55:08 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:16:19.309 16:55:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:19.309 16:55:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:19.309 16:55:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:19.309 16:55:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:19.309 16:55:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:19.309 16:55:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:19.309 16:55:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:19.309 16:55:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:19.309 16:55:08 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:19.309 16:55:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:19.309 16:55:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:19.567 16:55:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:19.567 "name": "Existed_Raid",
00:16:19.567 "uuid": "d4f043bc-5b39-4ea8-87d7-f7f57115230b",
00:16:19.567 "strip_size_kb": 64,
00:16:19.567 "state": "configuring",
00:16:19.567 "raid_level": "raid0",
00:16:19.567 "superblock": true,
00:16:19.567 "num_base_bdevs": 3,
00:16:19.567 "num_base_bdevs_discovered": 0,
00:16:19.567 "num_base_bdevs_operational": 3,
00:16:19.567 "base_bdevs_list": [
00:16:19.567 {
00:16:19.567 "name": "BaseBdev1",
00:16:19.567 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:19.567 "is_configured": false,
00:16:19.567 "data_offset": 0,
00:16:19.567 "data_size": 0
00:16:19.567 },
00:16:19.567 {
00:16:19.567 "name": "BaseBdev2",
00:16:19.567 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:19.567 "is_configured": false,
00:16:19.567 "data_offset": 0,
00:16:19.567 "data_size": 0
00:16:19.567 },
00:16:19.567 {
00:16:19.567 "name": "BaseBdev3",
00:16:19.567 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:19.567 "is_configured": false,
00:16:19.567 "data_offset": 0,
00:16:19.567 "data_size": 0
00:16:19.567 }
00:16:19.567 ]
00:16:19.567 }'
00:16:19.567 16:55:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:19.567 16:55:08 -- common/autotest_common.sh@10 -- # set +x
00:16:20.501 16:55:09 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:16:20.501 [2024-11-05 16:55:09.360139] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:20.501 [2024-11-05 16:55:09.360445] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:16:20.501 16:55:09 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:16:20.759 [2024-11-05 16:55:09.620287] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:20.759 [2024-11-05 16:55:09.620557] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:20.759 [2024-11-05 16:55:09.620685] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:20.759 [2024-11-05 16:55:09.620841] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:20.759 [2024-11-05 16:55:09.620940] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:20.759 [2024-11-05 16:55:09.621058] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:20.759 16:55:09 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:16:21.323 [2024-11-05 16:55:09.924961] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:21.323 BaseBdev1
00:16:21.323 16:55:09 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:16:21.323 16:55:09 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:16:21.323 16:55:09 -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:21.323 16:55:09 -- common/autotest_common.sh@899 -- # local i
00:16:21.323 16:55:09 -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:21.323 16:55:09 -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:21.323 16:55:09 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:21.581 16:55:10 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:21.581 [
00:16:21.581 {
00:16:21.581 "name": "BaseBdev1",
00:16:21.581 "aliases": [
00:16:21.581 "c1376525-26ee-4432-afb2-eeb9a6deb10d"
00:16:21.581 ],
00:16:21.581 "product_name": "Malloc disk",
00:16:21.581 "block_size": 512,
00:16:21.581 "num_blocks": 65536,
00:16:21.581 "uuid": "c1376525-26ee-4432-afb2-eeb9a6deb10d",
00:16:21.581 "assigned_rate_limits": {
00:16:21.581 "rw_ios_per_sec": 0,
00:16:21.581 "rw_mbytes_per_sec": 0,
00:16:21.581 "r_mbytes_per_sec": 0,
00:16:21.581 "w_mbytes_per_sec": 0
00:16:21.581 },
00:16:21.581 "claimed": true,
00:16:21.581 "claim_type": "exclusive_write",
00:16:21.581 "zoned": false,
00:16:21.581 "supported_io_types": {
00:16:21.581 "read": true,
00:16:21.581 "write": true,
00:16:21.581 "unmap": true,
00:16:21.581 "write_zeroes": true,
00:16:21.581 "flush": true,
00:16:21.581 "reset": true,
00:16:21.581 "compare": false,
00:16:21.581 "compare_and_write": false,
00:16:21.581 "abort": true,
00:16:21.581 "nvme_admin": false,
00:16:21.581 "nvme_io": false
00:16:21.581 },
00:16:21.581 "memory_domains": [
00:16:21.581 {
00:16:21.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:21.581 "dma_device_type": 2
00:16:21.581 }
00:16:21.581 ],
00:16:21.581 "driver_specific": {}
00:16:21.581 }
00:16:21.581 ]
00:16:21.581 16:55:10 -- common/autotest_common.sh@905 -- # return 0
00:16:21.581 16:55:10 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:16:21.581 16:55:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:21.581 16:55:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:21.581 16:55:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:21.581 16:55:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:21.581 16:55:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:21.581 16:55:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:21.581 16:55:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:21.581 16:55:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:21.581 16:55:10 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:21.581 16:55:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:21.581 16:55:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:22.147 16:55:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:22.147 "name": "Existed_Raid",
00:16:22.147 "uuid": "54a3554b-da0a-4a47-815b-4e58d4cee969",
00:16:22.147 "strip_size_kb": 64,
00:16:22.147 "state": "configuring",
00:16:22.147 "raid_level": "raid0",
00:16:22.147 "superblock": true,
00:16:22.147 "num_base_bdevs": 3,
00:16:22.147 "num_base_bdevs_discovered": 1,
00:16:22.147 "num_base_bdevs_operational": 3,
00:16:22.147 "base_bdevs_list": [
00:16:22.147 {
00:16:22.147 "name": "BaseBdev1",
00:16:22.147 "uuid": "c1376525-26ee-4432-afb2-eeb9a6deb10d",
00:16:22.147 "is_configured": true,
00:16:22.147 "data_offset": 2048,
00:16:22.147 "data_size": 63488
00:16:22.147 },
00:16:22.147 {
00:16:22.147 "name": "BaseBdev2",
00:16:22.147 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:22.147 "is_configured": false,
00:16:22.147 "data_offset": 0,
00:16:22.147 "data_size": 0
00:16:22.147 },
00:16:22.147 {
00:16:22.147 "name": "BaseBdev3",
00:16:22.147 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:22.147 "is_configured": false,
00:16:22.147 "data_offset": 0,
00:16:22.147 "data_size": 0
00:16:22.147 }
00:16:22.147 ]
00:16:22.147 }'
00:16:22.147 16:55:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:22.147 16:55:10 -- common/autotest_common.sh@10 -- # set +x
00:16:22.713 16:55:11 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:16:22.713 [2024-11-05 16:55:11.601440] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:22.713 [2024-11-05 16:55:11.601685] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*:
raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:22.971 16:55:11 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:22.971 16:55:11 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:23.229 16:55:11 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:23.487 BaseBdev1 00:16:23.487 16:55:12 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:23.487 16:55:12 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:23.487 16:55:12 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:23.487 16:55:12 -- common/autotest_common.sh@899 -- # local i 00:16:23.487 16:55:12 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:23.487 16:55:12 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:23.487 16:55:12 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:23.745 16:55:12 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:23.745 [ 00:16:23.745 { 00:16:23.745 "name": "BaseBdev1", 00:16:23.745 "aliases": [ 00:16:23.745 "a50fa09e-dbb1-4ecd-a390-d68697ede72c" 00:16:23.745 ], 00:16:23.745 "product_name": "Malloc disk", 00:16:23.745 "block_size": 512, 00:16:23.745 "num_blocks": 65536, 00:16:23.745 "uuid": "a50fa09e-dbb1-4ecd-a390-d68697ede72c", 00:16:23.745 "assigned_rate_limits": { 00:16:23.745 "rw_ios_per_sec": 0, 00:16:23.745 "rw_mbytes_per_sec": 0, 00:16:23.745 "r_mbytes_per_sec": 0, 00:16:23.745 "w_mbytes_per_sec": 0 00:16:23.745 }, 00:16:23.745 "claimed": false, 00:16:23.745 "zoned": false, 00:16:23.745 "supported_io_types": { 00:16:23.745 "read": true, 00:16:23.745 "write": true, 00:16:23.745 "unmap": true, 00:16:23.745 "write_zeroes": true, 00:16:23.745 "flush": true, 00:16:23.745 "reset": true, 00:16:23.745 "compare": false, 00:16:23.745 "compare_and_write": false, 00:16:23.745 "abort": true, 00:16:23.745 "nvme_admin": false, 00:16:23.745 "nvme_io": false 00:16:23.745 }, 00:16:23.745 "memory_domains": [ 00:16:23.745 { 00:16:23.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.745 "dma_device_type": 2 00:16:23.745 } 00:16:23.745 ], 00:16:23.745 "driver_specific": {} 00:16:23.745 } 00:16:23.745 ] 00:16:24.003 16:55:12 -- common/autotest_common.sh@905 -- # return 0 00:16:24.003 16:55:12 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:24.003 [2024-11-05 16:55:12.882844] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.003 [2024-11-05 16:55:12.884840] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.003 [2024-11-05 16:55:12.885053] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:24.003 [2024-11-05 16:55:12.885184] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:24.004 [2024-11-05 16:55:12.885249] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:24.262 16:55:12 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:24.262 16:55:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:24.262 
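[Editor's note] Every base device in this test is provisioned with the same three RPCs the trace just replayed for BaseBdev1. Condensed into a sketch, with the sizes copied from the trace (a 32 MiB malloc bdev with 512-byte blocks, which matches the num_blocks of 65536 reported by bdev_get_bdevs):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    $rpc bdev_malloc_create 32 512 -b BaseBdev1   # 32 MiB / 512 B = 65536 blocks
    $rpc bdev_wait_for_examine                    # let bdev examine callbacks settle
    $rpc bdev_get_bdevs -b BaseBdev1 -t 2000      # waitforbdev: poll up to 2000 ms
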
16:55:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:24.262 16:55:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:24.262 16:55:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:24.262 16:55:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:24.262 16:55:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:24.262 16:55:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:24.262 16:55:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:24.262 16:55:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:24.262 16:55:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:24.262 16:55:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:24.262 16:55:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.262 16:55:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.262 16:55:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:24.262 "name": "Existed_Raid", 00:16:24.262 "uuid": "71904ea4-0cc7-4a7c-aaf6-95a26cca830b", 00:16:24.262 "strip_size_kb": 64, 00:16:24.262 "state": "configuring", 00:16:24.262 "raid_level": "raid0", 00:16:24.262 "superblock": true, 00:16:24.262 "num_base_bdevs": 3, 00:16:24.262 "num_base_bdevs_discovered": 1, 00:16:24.262 "num_base_bdevs_operational": 3, 00:16:24.262 "base_bdevs_list": [ 00:16:24.262 { 00:16:24.262 "name": "BaseBdev1", 00:16:24.262 "uuid": "a50fa09e-dbb1-4ecd-a390-d68697ede72c", 00:16:24.262 "is_configured": true, 00:16:24.262 "data_offset": 2048, 00:16:24.262 "data_size": 63488 00:16:24.262 }, 00:16:24.262 { 00:16:24.262 "name": "BaseBdev2", 00:16:24.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.262 "is_configured": false, 00:16:24.262 "data_offset": 0, 00:16:24.262 "data_size": 0 00:16:24.262 }, 00:16:24.262 { 00:16:24.262 "name": "BaseBdev3", 00:16:24.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.262 "is_configured": false, 00:16:24.262 "data_offset": 0, 00:16:24.262 "data_size": 0 00:16:24.262 } 00:16:24.262 ] 00:16:24.262 }' 00:16:24.262 16:55:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:24.262 16:55:13 -- common/autotest_common.sh@10 -- # set +x 00:16:25.197 16:55:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:25.197 [2024-11-05 16:55:14.071630] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:25.197 BaseBdev2 00:16:25.197 16:55:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:25.197 16:55:14 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:25.197 16:55:14 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:25.197 16:55:14 -- common/autotest_common.sh@899 -- # local i 00:16:25.197 16:55:14 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:25.197 16:55:14 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:25.197 16:55:14 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:25.453 16:55:14 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:25.709 [ 00:16:25.709 { 00:16:25.709 "name": "BaseBdev2", 00:16:25.709 "aliases": [ 00:16:25.709 
"3d38ba64-1b63-40bb-949f-7ddf40b4aa68" 00:16:25.709 ], 00:16:25.709 "product_name": "Malloc disk", 00:16:25.709 "block_size": 512, 00:16:25.709 "num_blocks": 65536, 00:16:25.709 "uuid": "3d38ba64-1b63-40bb-949f-7ddf40b4aa68", 00:16:25.709 "assigned_rate_limits": { 00:16:25.709 "rw_ios_per_sec": 0, 00:16:25.709 "rw_mbytes_per_sec": 0, 00:16:25.709 "r_mbytes_per_sec": 0, 00:16:25.709 "w_mbytes_per_sec": 0 00:16:25.709 }, 00:16:25.709 "claimed": true, 00:16:25.709 "claim_type": "exclusive_write", 00:16:25.709 "zoned": false, 00:16:25.709 "supported_io_types": { 00:16:25.709 "read": true, 00:16:25.709 "write": true, 00:16:25.709 "unmap": true, 00:16:25.709 "write_zeroes": true, 00:16:25.709 "flush": true, 00:16:25.709 "reset": true, 00:16:25.709 "compare": false, 00:16:25.709 "compare_and_write": false, 00:16:25.709 "abort": true, 00:16:25.709 "nvme_admin": false, 00:16:25.709 "nvme_io": false 00:16:25.709 }, 00:16:25.709 "memory_domains": [ 00:16:25.709 { 00:16:25.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.709 "dma_device_type": 2 00:16:25.709 } 00:16:25.709 ], 00:16:25.709 "driver_specific": {} 00:16:25.709 } 00:16:25.709 ] 00:16:25.709 16:55:14 -- common/autotest_common.sh@905 -- # return 0 00:16:25.709 16:55:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:25.709 16:55:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:25.709 16:55:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:25.709 16:55:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:25.709 16:55:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:25.709 16:55:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:25.709 16:55:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:25.709 16:55:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:25.709 16:55:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:25.709 16:55:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:25.709 16:55:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:25.709 16:55:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:25.709 16:55:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.709 16:55:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.966 16:55:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:25.966 "name": "Existed_Raid", 00:16:25.966 "uuid": "71904ea4-0cc7-4a7c-aaf6-95a26cca830b", 00:16:25.966 "strip_size_kb": 64, 00:16:25.966 "state": "configuring", 00:16:25.966 "raid_level": "raid0", 00:16:25.966 "superblock": true, 00:16:25.966 "num_base_bdevs": 3, 00:16:25.966 "num_base_bdevs_discovered": 2, 00:16:25.966 "num_base_bdevs_operational": 3, 00:16:25.966 "base_bdevs_list": [ 00:16:25.966 { 00:16:25.966 "name": "BaseBdev1", 00:16:25.966 "uuid": "a50fa09e-dbb1-4ecd-a390-d68697ede72c", 00:16:25.966 "is_configured": true, 00:16:25.966 "data_offset": 2048, 00:16:25.966 "data_size": 63488 00:16:25.966 }, 00:16:25.966 { 00:16:25.966 "name": "BaseBdev2", 00:16:25.966 "uuid": "3d38ba64-1b63-40bb-949f-7ddf40b4aa68", 00:16:25.966 "is_configured": true, 00:16:25.966 "data_offset": 2048, 00:16:25.966 "data_size": 63488 00:16:25.966 }, 00:16:25.966 { 00:16:25.966 "name": "BaseBdev3", 00:16:25.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.966 "is_configured": false, 00:16:25.966 "data_offset": 0, 00:16:25.966 "data_size": 0 00:16:25.966 
} 00:16:25.966 ] 00:16:25.966 }' 00:16:25.966 16:55:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:25.966 16:55:14 -- common/autotest_common.sh@10 -- # set +x 00:16:26.899 16:55:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:26.899 [2024-11-05 16:55:15.776427] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:26.899 [2024-11-05 16:55:15.776869] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:16:26.899 BaseBdev3 00:16:26.899 [2024-11-05 16:55:15.778016] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:26.899 [2024-11-05 16:55:15.778283] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:26.899 [2024-11-05 16:55:15.778787] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:16:26.899 [2024-11-05 16:55:15.778996] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:16:26.899 [2024-11-05 16:55:15.779284] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.217 16:55:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:27.217 16:55:15 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:27.217 16:55:15 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:27.217 16:55:15 -- common/autotest_common.sh@899 -- # local i 00:16:27.217 16:55:15 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:27.217 16:55:15 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:27.217 16:55:15 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:27.217 16:55:16 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:27.475 [ 00:16:27.475 { 00:16:27.475 "name": "BaseBdev3", 00:16:27.475 "aliases": [ 00:16:27.475 "a7baa399-d30b-4c24-aa5a-a839c7280e3e" 00:16:27.475 ], 00:16:27.475 "product_name": "Malloc disk", 00:16:27.475 "block_size": 512, 00:16:27.475 "num_blocks": 65536, 00:16:27.475 "uuid": "a7baa399-d30b-4c24-aa5a-a839c7280e3e", 00:16:27.475 "assigned_rate_limits": { 00:16:27.475 "rw_ios_per_sec": 0, 00:16:27.475 "rw_mbytes_per_sec": 0, 00:16:27.475 "r_mbytes_per_sec": 0, 00:16:27.475 "w_mbytes_per_sec": 0 00:16:27.475 }, 00:16:27.475 "claimed": true, 00:16:27.475 "claim_type": "exclusive_write", 00:16:27.475 "zoned": false, 00:16:27.475 "supported_io_types": { 00:16:27.475 "read": true, 00:16:27.475 "write": true, 00:16:27.475 "unmap": true, 00:16:27.475 "write_zeroes": true, 00:16:27.475 "flush": true, 00:16:27.475 "reset": true, 00:16:27.475 "compare": false, 00:16:27.475 "compare_and_write": false, 00:16:27.475 "abort": true, 00:16:27.475 "nvme_admin": false, 00:16:27.475 "nvme_io": false 00:16:27.475 }, 00:16:27.475 "memory_domains": [ 00:16:27.475 { 00:16:27.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.475 "dma_device_type": 2 00:16:27.475 } 00:16:27.475 ], 00:16:27.475 "driver_specific": {} 00:16:27.475 } 00:16:27.475 ] 00:16:27.475 16:55:16 -- common/autotest_common.sh@905 -- # return 0 00:16:27.475 16:55:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:27.475 16:55:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:27.475 16:55:16 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:27.475 16:55:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:27.475 16:55:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:27.475 16:55:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:27.475 16:55:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:27.475 16:55:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:27.475 16:55:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:27.475 16:55:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:27.475 16:55:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:27.475 16:55:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:27.475 16:55:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.475 16:55:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.732 16:55:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:27.732 "name": "Existed_Raid", 00:16:27.732 "uuid": "71904ea4-0cc7-4a7c-aaf6-95a26cca830b", 00:16:27.732 "strip_size_kb": 64, 00:16:27.732 "state": "online", 00:16:27.732 "raid_level": "raid0", 00:16:27.732 "superblock": true, 00:16:27.732 "num_base_bdevs": 3, 00:16:27.732 "num_base_bdevs_discovered": 3, 00:16:27.732 "num_base_bdevs_operational": 3, 00:16:27.732 "base_bdevs_list": [ 00:16:27.732 { 00:16:27.732 "name": "BaseBdev1", 00:16:27.732 "uuid": "a50fa09e-dbb1-4ecd-a390-d68697ede72c", 00:16:27.732 "is_configured": true, 00:16:27.732 "data_offset": 2048, 00:16:27.732 "data_size": 63488 00:16:27.732 }, 00:16:27.732 { 00:16:27.732 "name": "BaseBdev2", 00:16:27.732 "uuid": "3d38ba64-1b63-40bb-949f-7ddf40b4aa68", 00:16:27.732 "is_configured": true, 00:16:27.732 "data_offset": 2048, 00:16:27.732 "data_size": 63488 00:16:27.732 }, 00:16:27.732 { 00:16:27.732 "name": "BaseBdev3", 00:16:27.732 "uuid": "a7baa399-d30b-4c24-aa5a-a839c7280e3e", 00:16:27.732 "is_configured": true, 00:16:27.732 "data_offset": 2048, 00:16:27.732 "data_size": 63488 00:16:27.732 } 00:16:27.732 ] 00:16:27.732 }' 00:16:27.732 16:55:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:27.732 16:55:16 -- common/autotest_common.sh@10 -- # set +x 00:16:28.296 16:55:17 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:28.554 [2024-11-05 16:55:17.288937] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:28.554 [2024-11-05 16:55:17.289226] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:28.554 [2024-11-05 16:55:17.289379] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.554 16:55:17 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:28.554 16:55:17 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:28.554 16:55:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:28.554 16:55:17 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:28.554 16:55:17 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:28.554 16:55:17 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:28.554 16:55:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:28.554 16:55:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:28.554 16:55:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:28.554 16:55:17 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:16:28.554 16:55:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:28.554 16:55:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:28.554 16:55:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:28.554 16:55:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:28.554 16:55:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:28.554 16:55:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.554 16:55:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.811 16:55:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:28.811 "name": "Existed_Raid", 00:16:28.811 "uuid": "71904ea4-0cc7-4a7c-aaf6-95a26cca830b", 00:16:28.811 "strip_size_kb": 64, 00:16:28.811 "state": "offline", 00:16:28.811 "raid_level": "raid0", 00:16:28.811 "superblock": true, 00:16:28.811 "num_base_bdevs": 3, 00:16:28.811 "num_base_bdevs_discovered": 2, 00:16:28.811 "num_base_bdevs_operational": 2, 00:16:28.811 "base_bdevs_list": [ 00:16:28.811 { 00:16:28.811 "name": null, 00:16:28.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.811 "is_configured": false, 00:16:28.811 "data_offset": 2048, 00:16:28.811 "data_size": 63488 00:16:28.811 }, 00:16:28.811 { 00:16:28.811 "name": "BaseBdev2", 00:16:28.811 "uuid": "3d38ba64-1b63-40bb-949f-7ddf40b4aa68", 00:16:28.811 "is_configured": true, 00:16:28.811 "data_offset": 2048, 00:16:28.811 "data_size": 63488 00:16:28.811 }, 00:16:28.811 { 00:16:28.811 "name": "BaseBdev3", 00:16:28.811 "uuid": "a7baa399-d30b-4c24-aa5a-a839c7280e3e", 00:16:28.811 "is_configured": true, 00:16:28.811 "data_offset": 2048, 00:16:28.811 "data_size": 63488 00:16:28.811 } 00:16:28.811 ] 00:16:28.811 }' 00:16:28.811 16:55:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:28.811 16:55:17 -- common/autotest_common.sh@10 -- # set +x 00:16:29.374 16:55:18 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:29.374 16:55:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:29.374 16:55:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:29.374 16:55:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.630 16:55:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:29.630 16:55:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:29.630 16:55:18 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:29.887 [2024-11-05 16:55:18.686465] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:29.887 16:55:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:29.887 16:55:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:29.887 16:55:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:29.887 16:55:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.145 16:55:19 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:30.145 16:55:19 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:30.145 16:55:19 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:30.403 [2024-11-05 16:55:19.223303] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:30.403 [2024-11-05 
16:55:19.223535] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:16:30.661 16:55:19 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:30.661 16:55:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:30.661 16:55:19 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.662 16:55:19 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:30.924 16:55:19 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:30.924 16:55:19 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:30.924 16:55:19 -- bdev/bdev_raid.sh@287 -- # killprocess 115241 00:16:30.924 16:55:19 -- common/autotest_common.sh@936 -- # '[' -z 115241 ']' 00:16:30.924 16:55:19 -- common/autotest_common.sh@940 -- # kill -0 115241 00:16:30.924 16:55:19 -- common/autotest_common.sh@941 -- # uname 00:16:30.924 16:55:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:30.924 16:55:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115241 00:16:30.924 16:55:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:30.924 16:55:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:30.924 16:55:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115241' 00:16:30.924 killing process with pid 115241 00:16:30.924 16:55:19 -- common/autotest_common.sh@955 -- # kill 115241 00:16:30.924 [2024-11-05 16:55:19.597085] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:30.924 16:55:19 -- common/autotest_common.sh@960 -- # wait 115241 00:16:30.924 [2024-11-05 16:55:19.597426] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:31.863 ************************************ 00:16:31.863 END TEST raid_state_function_test_sb 00:16:31.863 ************************************ 00:16:31.863 16:55:20 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:31.863 00:16:31.863 real 0m13.691s 00:16:31.863 user 0m24.299s 00:16:31.863 sys 0m1.528s 00:16:31.863 16:55:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:31.863 16:55:20 -- common/autotest_common.sh@10 -- # set +x 00:16:31.863 16:55:20 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:16:31.863 16:55:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:31.863 16:55:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:31.863 16:55:20 -- common/autotest_common.sh@10 -- # set +x 00:16:31.863 ************************************ 00:16:31.863 START TEST raid_superblock_test 00:16:31.863 ************************************ 00:16:31.863 16:55:20 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 3 00:16:31.863 16:55:20 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:16:31.863 16:55:20 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:31.863 16:55:20 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:31.863 16:55:20 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:31.863 16:55:20 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:31.863 16:55:20 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:31.863 16:55:20 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:31.863 16:55:20 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:31.863 16:55:20 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:31.863 16:55:20 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:31.863 16:55:20 -- 
bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:31.863 16:55:20 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:31.863 16:55:20 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:31.863 16:55:20 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:16:31.863 16:55:20 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:31.863 16:55:20 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:31.863 16:55:20 -- bdev/bdev_raid.sh@357 -- # raid_pid=115640 00:16:31.863 16:55:20 -- bdev/bdev_raid.sh@358 -- # waitforlisten 115640 /var/tmp/spdk-raid.sock 00:16:31.863 16:55:20 -- common/autotest_common.sh@829 -- # '[' -z 115640 ']' 00:16:31.863 16:55:20 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:31.863 16:55:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:31.863 16:55:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:31.863 16:55:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:31.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:31.863 16:55:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:31.863 16:55:20 -- common/autotest_common.sh@10 -- # set +x 00:16:31.863 [2024-11-05 16:55:20.711167] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:31.863 [2024-11-05 16:55:20.711734] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115640 ] 00:16:32.121 [2024-11-05 16:55:20.882943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.379 [2024-11-05 16:55:21.055131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.379 [2024-11-05 16:55:21.227161] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:32.970 16:55:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:32.970 16:55:21 -- common/autotest_common.sh@862 -- # return 0 00:16:32.970 16:55:21 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:32.970 16:55:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:32.970 16:55:21 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:32.970 16:55:21 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:32.970 16:55:21 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:32.970 16:55:21 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:32.970 16:55:21 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:32.970 16:55:21 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:32.970 16:55:21 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:33.229 malloc1 00:16:33.229 16:55:21 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:33.488 [2024-11-05 16:55:22.126370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:33.488 [2024-11-05 16:55:22.126674] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.488 
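[Editor's note] Unlike the state-function test, raid_superblock_test wraps each malloc device in a passthru bdev with a pinned UUID (ending in 0001, 0002, 0003) before handing it to the RAID module; the fixed UUIDs presumably give the test deterministic base-bdev identities for the superblock checks later in the run, though the trace does not state the rationale. The two RPCs, copied from the trace:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    $rpc bdev_malloc_create 32 512 -b malloc1
    $rpc bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001
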
[2024-11-05 16:55:22.126897] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:16:33.488 [2024-11-05 16:55:22.127047] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.488 [2024-11-05 16:55:22.129660] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.488 [2024-11-05 16:55:22.129851] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:33.488 pt1 00:16:33.488 16:55:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:33.488 16:55:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:33.488 16:55:22 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:33.488 16:55:22 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:33.488 16:55:22 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:33.488 16:55:22 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:33.488 16:55:22 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:33.488 16:55:22 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:33.488 16:55:22 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:33.746 malloc2 00:16:33.746 16:55:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:33.746 [2024-11-05 16:55:22.593388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:33.746 [2024-11-05 16:55:22.593642] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.746 [2024-11-05 16:55:22.593789] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:33.746 [2024-11-05 16:55:22.593976] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.746 [2024-11-05 16:55:22.596655] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.746 [2024-11-05 16:55:22.596828] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:33.746 pt2 00:16:33.746 16:55:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:33.746 16:55:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:33.746 16:55:22 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:33.746 16:55:22 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:33.746 16:55:22 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:33.746 16:55:22 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:33.746 16:55:22 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:33.746 16:55:22 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:33.746 16:55:22 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:34.004 malloc3 00:16:34.004 16:55:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:34.262 [2024-11-05 16:55:23.078777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:34.262 [2024-11-05 16:55:23.079094] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.262 
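[Editor's note] With pt1 and pt2 registered and pt3 about to complete below, the test assembles the three passthru bdevs into a superblock-backed raid0 (-s) with a 64 KiB strip size (-z 64), then reads the array state back through the same jq filter that verify_raid_bdev_state uses throughout this log. Both commands appear verbatim in the trace:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    $rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
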
[2024-11-05 16:55:23.079280] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:16:34.262 [2024-11-05 16:55:23.079480] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.262 [2024-11-05 16:55:23.081946] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.262 [2024-11-05 16:55:23.082144] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:34.262 pt3 00:16:34.262 16:55:23 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:34.262 16:55:23 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:34.262 16:55:23 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:34.520 [2024-11-05 16:55:23.278894] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:34.520 [2024-11-05 16:55:23.280959] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:34.520 [2024-11-05 16:55:23.281185] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:34.520 [2024-11-05 16:55:23.281414] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:16:34.520 [2024-11-05 16:55:23.281540] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:34.520 [2024-11-05 16:55:23.281697] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:34.520 [2024-11-05 16:55:23.282184] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:16:34.520 [2024-11-05 16:55:23.282344] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:16:34.520 [2024-11-05 16:55:23.282592] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.520 16:55:23 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:34.520 16:55:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:34.520 16:55:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:34.520 16:55:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:34.520 16:55:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:34.520 16:55:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:34.520 16:55:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:34.520 16:55:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:34.520 16:55:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:34.520 16:55:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:34.520 16:55:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.520 16:55:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.777 16:55:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:34.777 "name": "raid_bdev1", 00:16:34.777 "uuid": "2c0c85b6-8f35-446a-beb6-f0d1930b6880", 00:16:34.777 "strip_size_kb": 64, 00:16:34.777 "state": "online", 00:16:34.777 "raid_level": "raid0", 00:16:34.777 "superblock": true, 00:16:34.777 "num_base_bdevs": 3, 00:16:34.777 "num_base_bdevs_discovered": 3, 00:16:34.777 "num_base_bdevs_operational": 3, 00:16:34.777 "base_bdevs_list": [ 00:16:34.777 { 00:16:34.777 "name": "pt1", 00:16:34.777 "uuid": 
"c0f9e642-5170-5618-a908-902ee245d421", 00:16:34.777 "is_configured": true, 00:16:34.777 "data_offset": 2048, 00:16:34.777 "data_size": 63488 00:16:34.777 }, 00:16:34.777 { 00:16:34.777 "name": "pt2", 00:16:34.777 "uuid": "12782367-f2aa-5b13-909e-3db35fc11385", 00:16:34.777 "is_configured": true, 00:16:34.777 "data_offset": 2048, 00:16:34.777 "data_size": 63488 00:16:34.777 }, 00:16:34.777 { 00:16:34.777 "name": "pt3", 00:16:34.778 "uuid": "ad7f5c6f-28ec-5da2-bdaf-41226ccb9af5", 00:16:34.778 "is_configured": true, 00:16:34.778 "data_offset": 2048, 00:16:34.778 "data_size": 63488 00:16:34.778 } 00:16:34.778 ] 00:16:34.778 }' 00:16:34.778 16:55:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:34.778 16:55:23 -- common/autotest_common.sh@10 -- # set +x 00:16:35.342 16:55:24 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:35.342 16:55:24 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:35.600 [2024-11-05 16:55:24.379366] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.600 16:55:24 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=2c0c85b6-8f35-446a-beb6-f0d1930b6880 00:16:35.600 16:55:24 -- bdev/bdev_raid.sh@380 -- # '[' -z 2c0c85b6-8f35-446a-beb6-f0d1930b6880 ']' 00:16:35.600 16:55:24 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:35.860 [2024-11-05 16:55:24.583165] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:35.860 [2024-11-05 16:55:24.583334] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:35.860 [2024-11-05 16:55:24.583518] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.860 [2024-11-05 16:55:24.583721] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:35.860 [2024-11-05 16:55:24.583825] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:16:35.860 16:55:24 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.860 16:55:24 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:36.119 16:55:24 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:36.120 16:55:24 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:36.120 16:55:24 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:36.120 16:55:24 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:36.378 16:55:25 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:36.378 16:55:25 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:36.637 16:55:25 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:36.637 16:55:25 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:36.637 16:55:25 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:36.637 16:55:25 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:36.895 16:55:25 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:36.895 16:55:25 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:36.895 16:55:25 -- common/autotest_common.sh@650 -- # local es=0 00:16:36.895 16:55:25 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:36.895 16:55:25 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:36.895 16:55:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:36.895 16:55:25 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:36.895 16:55:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:36.895 16:55:25 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:36.895 16:55:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:36.895 16:55:25 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:36.895 16:55:25 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:36.895 16:55:25 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:37.154 [2024-11-05 16:55:25.879582] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:37.154 [2024-11-05 16:55:25.881716] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:37.154 [2024-11-05 16:55:25.881930] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:37.154 [2024-11-05 16:55:25.882027] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:37.154 [2024-11-05 16:55:25.882341] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:37.154 [2024-11-05 16:55:25.882525] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:37.154 [2024-11-05 16:55:25.882692] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:37.154 [2024-11-05 16:55:25.882792] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:16:37.154 request: 00:16:37.154 { 00:16:37.154 "name": "raid_bdev1", 00:16:37.154 "raid_level": "raid0", 00:16:37.154 "base_bdevs": [ 00:16:37.154 "malloc1", 00:16:37.154 "malloc2", 00:16:37.154 "malloc3" 00:16:37.154 ], 00:16:37.154 "superblock": false, 00:16:37.154 "strip_size_kb": 64, 00:16:37.154 "method": "bdev_raid_create", 00:16:37.154 "req_id": 1 00:16:37.154 } 00:16:37.154 Got JSON-RPC error response 00:16:37.154 response: 00:16:37.154 { 00:16:37.154 "code": -17, 00:16:37.154 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:37.154 } 00:16:37.154 16:55:25 -- common/autotest_common.sh@653 -- # es=1 00:16:37.154 16:55:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:37.154 16:55:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:37.154 16:55:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:37.154 16:55:25 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.154 16:55:25 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:37.412 16:55:26 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:37.412 16:55:26 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:37.412 16:55:26 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:37.670 [2024-11-05 16:55:26.319626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:37.670 [2024-11-05 16:55:26.319884] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.670 [2024-11-05 16:55:26.320032] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:37.670 [2024-11-05 16:55:26.320168] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.670 [2024-11-05 16:55:26.322681] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.670 [2024-11-05 16:55:26.322915] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:37.671 [2024-11-05 16:55:26.323147] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:37.671 [2024-11-05 16:55:26.323326] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:37.671 pt1 00:16:37.671 16:55:26 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:16:37.671 16:55:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:37.671 16:55:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:37.671 16:55:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:37.671 16:55:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:37.671 16:55:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:37.671 16:55:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:37.671 16:55:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:37.671 16:55:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:37.671 16:55:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:37.671 16:55:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.671 16:55:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.929 16:55:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:37.929 "name": "raid_bdev1", 00:16:37.929 "uuid": "2c0c85b6-8f35-446a-beb6-f0d1930b6880", 00:16:37.929 "strip_size_kb": 64, 00:16:37.929 "state": "configuring", 00:16:37.929 "raid_level": "raid0", 00:16:37.929 "superblock": true, 00:16:37.929 "num_base_bdevs": 3, 00:16:37.929 "num_base_bdevs_discovered": 1, 00:16:37.929 "num_base_bdevs_operational": 3, 00:16:37.929 "base_bdevs_list": [ 00:16:37.929 { 00:16:37.929 "name": "pt1", 00:16:37.929 "uuid": "c0f9e642-5170-5618-a908-902ee245d421", 00:16:37.929 "is_configured": true, 00:16:37.929 "data_offset": 2048, 00:16:37.929 "data_size": 63488 00:16:37.929 }, 00:16:37.929 { 00:16:37.929 "name": null, 00:16:37.929 "uuid": "12782367-f2aa-5b13-909e-3db35fc11385", 00:16:37.929 "is_configured": false, 00:16:37.929 "data_offset": 2048, 00:16:37.929 "data_size": 63488 00:16:37.929 }, 00:16:37.929 { 00:16:37.929 "name": null, 00:16:37.929 "uuid": "ad7f5c6f-28ec-5da2-bdaf-41226ccb9af5", 00:16:37.929 "is_configured": false, 00:16:37.929 
"data_offset": 2048, 00:16:37.929 "data_size": 63488 00:16:37.929 } 00:16:37.929 ] 00:16:37.929 }' 00:16:37.929 16:55:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:37.929 16:55:26 -- common/autotest_common.sh@10 -- # set +x 00:16:38.497 16:55:27 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:38.497 16:55:27 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:38.755 [2024-11-05 16:55:27.427935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:38.755 [2024-11-05 16:55:27.428206] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.755 [2024-11-05 16:55:27.428405] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:38.755 [2024-11-05 16:55:27.428526] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.755 [2024-11-05 16:55:27.429104] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.755 [2024-11-05 16:55:27.429294] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:38.755 [2024-11-05 16:55:27.429589] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:38.755 [2024-11-05 16:55:27.429725] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:38.755 pt2 00:16:38.755 16:55:27 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:39.013 [2024-11-05 16:55:27.679983] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:39.013 16:55:27 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:16:39.013 16:55:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:39.013 16:55:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:39.013 16:55:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:39.013 16:55:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:39.013 16:55:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:39.013 16:55:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:39.013 16:55:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:39.013 16:55:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:39.013 16:55:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:39.013 16:55:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.013 16:55:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.271 16:55:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:39.271 "name": "raid_bdev1", 00:16:39.271 "uuid": "2c0c85b6-8f35-446a-beb6-f0d1930b6880", 00:16:39.271 "strip_size_kb": 64, 00:16:39.271 "state": "configuring", 00:16:39.271 "raid_level": "raid0", 00:16:39.271 "superblock": true, 00:16:39.271 "num_base_bdevs": 3, 00:16:39.271 "num_base_bdevs_discovered": 1, 00:16:39.271 "num_base_bdevs_operational": 3, 00:16:39.271 "base_bdevs_list": [ 00:16:39.271 { 00:16:39.271 "name": "pt1", 00:16:39.271 "uuid": "c0f9e642-5170-5618-a908-902ee245d421", 00:16:39.271 "is_configured": true, 00:16:39.271 "data_offset": 2048, 00:16:39.271 "data_size": 63488 00:16:39.271 }, 00:16:39.271 { 00:16:39.271 "name": null, 00:16:39.271 "uuid": 
"12782367-f2aa-5b13-909e-3db35fc11385", 00:16:39.271 "is_configured": false, 00:16:39.271 "data_offset": 2048, 00:16:39.271 "data_size": 63488 00:16:39.271 }, 00:16:39.271 { 00:16:39.271 "name": null, 00:16:39.271 "uuid": "ad7f5c6f-28ec-5da2-bdaf-41226ccb9af5", 00:16:39.271 "is_configured": false, 00:16:39.271 "data_offset": 2048, 00:16:39.271 "data_size": 63488 00:16:39.271 } 00:16:39.271 ] 00:16:39.271 }' 00:16:39.271 16:55:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:39.271 16:55:27 -- common/autotest_common.sh@10 -- # set +x 00:16:39.838 16:55:28 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:39.838 16:55:28 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:39.838 16:55:28 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:40.096 [2024-11-05 16:55:28.780251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:40.096 [2024-11-05 16:55:28.780547] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.097 [2024-11-05 16:55:28.780698] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:40.097 [2024-11-05 16:55:28.780835] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.097 [2024-11-05 16:55:28.781420] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.097 [2024-11-05 16:55:28.781620] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:40.097 [2024-11-05 16:55:28.781888] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:40.097 [2024-11-05 16:55:28.782024] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:40.097 pt2 00:16:40.097 16:55:28 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:40.097 16:55:28 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:40.097 16:55:28 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:40.355 [2024-11-05 16:55:29.024289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:40.355 [2024-11-05 16:55:29.024508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.355 [2024-11-05 16:55:29.024657] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:40.355 [2024-11-05 16:55:29.024793] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.355 [2024-11-05 16:55:29.025359] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.355 [2024-11-05 16:55:29.025565] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:40.355 [2024-11-05 16:55:29.025799] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:40.355 [2024-11-05 16:55:29.025927] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:40.355 [2024-11-05 16:55:29.026142] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:16:40.355 [2024-11-05 16:55:29.026257] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:40.355 [2024-11-05 16:55:29.026469] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:16:40.355 [2024-11-05 16:55:29.026945] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:16:40.355 [2024-11-05 16:55:29.027080] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:16:40.355 [2024-11-05 16:55:29.027336] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.355 pt3 00:16:40.355 16:55:29 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:40.355 16:55:29 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:40.355 16:55:29 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:40.355 16:55:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:40.355 16:55:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:40.355 16:55:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:40.355 16:55:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:40.355 16:55:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:40.355 16:55:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:40.355 16:55:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:40.355 16:55:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:40.355 16:55:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:40.355 16:55:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.355 16:55:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.614 16:55:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:40.614 "name": "raid_bdev1", 00:16:40.614 "uuid": "2c0c85b6-8f35-446a-beb6-f0d1930b6880", 00:16:40.614 "strip_size_kb": 64, 00:16:40.614 "state": "online", 00:16:40.614 "raid_level": "raid0", 00:16:40.614 "superblock": true, 00:16:40.614 "num_base_bdevs": 3, 00:16:40.614 "num_base_bdevs_discovered": 3, 00:16:40.614 "num_base_bdevs_operational": 3, 00:16:40.614 "base_bdevs_list": [ 00:16:40.614 { 00:16:40.614 "name": "pt1", 00:16:40.614 "uuid": "c0f9e642-5170-5618-a908-902ee245d421", 00:16:40.614 "is_configured": true, 00:16:40.614 "data_offset": 2048, 00:16:40.614 "data_size": 63488 00:16:40.614 }, 00:16:40.614 { 00:16:40.614 "name": "pt2", 00:16:40.614 "uuid": "12782367-f2aa-5b13-909e-3db35fc11385", 00:16:40.614 "is_configured": true, 00:16:40.614 "data_offset": 2048, 00:16:40.614 "data_size": 63488 00:16:40.614 }, 00:16:40.614 { 00:16:40.614 "name": "pt3", 00:16:40.614 "uuid": "ad7f5c6f-28ec-5da2-bdaf-41226ccb9af5", 00:16:40.614 "is_configured": true, 00:16:40.614 "data_offset": 2048, 00:16:40.614 "data_size": 63488 00:16:40.614 } 00:16:40.614 ] 00:16:40.614 }' 00:16:40.614 16:55:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:40.614 16:55:29 -- common/autotest_common.sh@10 -- # set +x 00:16:41.181 16:55:29 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:41.181 16:55:29 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:41.440 [2024-11-05 16:55:30.104786] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:41.440 16:55:30 -- bdev/bdev_raid.sh@430 -- # '[' 2c0c85b6-8f35-446a-beb6-f0d1930b6880 '!=' 2c0c85b6-8f35-446a-beb6-f0d1930b6880 ']' 00:16:41.440 16:55:30 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:16:41.440 16:55:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:41.440 
16:55:30 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:41.440 16:55:30 -- bdev/bdev_raid.sh@511 -- # killprocess 115640 00:16:41.440 16:55:30 -- common/autotest_common.sh@936 -- # '[' -z 115640 ']' 00:16:41.440 16:55:30 -- common/autotest_common.sh@940 -- # kill -0 115640 00:16:41.440 16:55:30 -- common/autotest_common.sh@941 -- # uname 00:16:41.440 16:55:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:41.440 16:55:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115640 00:16:41.440 killing process with pid 115640 00:16:41.440 16:55:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:41.440 16:55:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:41.440 16:55:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115640' 00:16:41.440 16:55:30 -- common/autotest_common.sh@955 -- # kill 115640 00:16:41.440 [2024-11-05 16:55:30.149393] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:41.440 16:55:30 -- common/autotest_common.sh@960 -- # wait 115640 00:16:41.440 [2024-11-05 16:55:30.149534] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.440 [2024-11-05 16:55:30.149592] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.440 [2024-11-05 16:55:30.149617] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:16:41.698 [2024-11-05 16:55:30.352313] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:42.632 ************************************ 00:16:42.632 END TEST raid_superblock_test 00:16:42.633 ************************************ 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:42.633 00:16:42.633 real 0m10.694s 00:16:42.633 user 0m18.562s 00:16:42.633 sys 0m1.288s 00:16:42.633 16:55:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:42.633 16:55:31 -- common/autotest_common.sh@10 -- # set +x 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:16:42.633 16:55:31 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:42.633 16:55:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:42.633 16:55:31 -- common/autotest_common.sh@10 -- # set +x 00:16:42.633 ************************************ 00:16:42.633 START TEST raid_state_function_test 00:16:42.633 ************************************ 00:16:42.633 16:55:31 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 3 false 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@226 -- # raid_pid=115953 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:42.633 Process raid pid: 115953 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 115953' 00:16:42.633 16:55:31 -- bdev/bdev_raid.sh@228 -- # waitforlisten 115953 /var/tmp/spdk-raid.sock 00:16:42.633 16:55:31 -- common/autotest_common.sh@829 -- # '[' -z 115953 ']' 00:16:42.633 16:55:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:42.633 16:55:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:42.633 16:55:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:42.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:42.633 16:55:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:42.633 16:55:31 -- common/autotest_common.sh@10 -- # set +x 00:16:42.633 [2024-11-05 16:55:31.472018] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
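The trace above shows raid_state_function_test bringing up a dedicated bdev_svc app on a private RPC socket and parking in waitforlisten until that socket answers; every rpc.py call in this test then targets it via -s /var/tmp/spdk-raid.sock. A minimal sketch of that launch-and-wait step, assuming a simple polling loop (binary and socket paths copied from the trace; the loop itself is an illustration, not the autotest_common.sh implementation):

    # Launch the test app with a private RPC socket (-r) and raid debug logging (-L bdev_raid).
    rpc_sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$rpc_sock" -i 0 -L bdev_raid &
    raid_pid=$!
    # Poll until the app answers RPCs; rpc_get_methods is a standard SPDK RPC.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; do
        kill -0 "$raid_pid" || { echo "bdev_svc exited before listening" >&2; exit 1; }
        sleep 0.1
    done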
00:16:42.633 [2024-11-05 16:55:31.472503] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.891 [2024-11-05 16:55:31.642894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.149 [2024-11-05 16:55:31.840141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.149 [2024-11-05 16:55:32.027387] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:43.715 16:55:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:43.715 16:55:32 -- common/autotest_common.sh@862 -- # return 0 00:16:43.715 16:55:32 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:43.973 [2024-11-05 16:55:32.632048] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:43.973 [2024-11-05 16:55:32.632295] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:43.973 [2024-11-05 16:55:32.632431] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:43.973 [2024-11-05 16:55:32.632494] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:43.973 [2024-11-05 16:55:32.632595] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:43.973 [2024-11-05 16:55:32.632679] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:43.973 16:55:32 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:43.973 16:55:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:43.973 16:55:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:43.973 16:55:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:43.973 16:55:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:43.973 16:55:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:43.973 16:55:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:43.973 16:55:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:43.973 16:55:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:43.973 16:55:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:43.973 16:55:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.973 16:55:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.973 16:55:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:43.973 "name": "Existed_Raid", 00:16:43.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.973 "strip_size_kb": 64, 00:16:43.973 "state": "configuring", 00:16:43.973 "raid_level": "concat", 00:16:43.973 "superblock": false, 00:16:43.973 "num_base_bdevs": 3, 00:16:43.973 "num_base_bdevs_discovered": 0, 00:16:43.973 "num_base_bdevs_operational": 3, 00:16:43.973 "base_bdevs_list": [ 00:16:43.973 { 00:16:43.973 "name": "BaseBdev1", 00:16:43.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.973 "is_configured": false, 00:16:43.973 "data_offset": 0, 00:16:43.973 "data_size": 0 00:16:43.973 }, 00:16:43.973 { 00:16:43.973 "name": "BaseBdev2", 00:16:43.973 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:43.973 "is_configured": false, 00:16:43.973 "data_offset": 0, 00:16:43.973 "data_size": 0 00:16:43.973 }, 00:16:43.973 { 00:16:43.973 "name": "BaseBdev3", 00:16:43.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.973 "is_configured": false, 00:16:43.973 "data_offset": 0, 00:16:43.973 "data_size": 0 00:16:43.973 } 00:16:43.973 ] 00:16:43.973 }' 00:16:43.973 16:55:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:43.973 16:55:32 -- common/autotest_common.sh@10 -- # set +x 00:16:44.908 16:55:33 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:44.908 [2024-11-05 16:55:33.708294] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:44.908 [2024-11-05 16:55:33.708657] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:16:44.908 16:55:33 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:45.166 [2024-11-05 16:55:33.968375] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:45.166 [2024-11-05 16:55:33.968722] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:45.166 [2024-11-05 16:55:33.968835] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:45.166 [2024-11-05 16:55:33.968906] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:45.166 [2024-11-05 16:55:33.969039] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:45.166 [2024-11-05 16:55:33.969105] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:45.166 16:55:33 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:45.424 [2024-11-05 16:55:34.250727] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.424 BaseBdev1 00:16:45.424 16:55:34 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:45.424 16:55:34 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:45.424 16:55:34 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:45.424 16:55:34 -- common/autotest_common.sh@899 -- # local i 00:16:45.424 16:55:34 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:45.424 16:55:34 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:45.424 16:55:34 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:45.682 16:55:34 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:45.940 [ 00:16:45.940 { 00:16:45.940 "name": "BaseBdev1", 00:16:45.940 "aliases": [ 00:16:45.940 "faca1097-8865-4922-9634-2600ec7cfcee" 00:16:45.940 ], 00:16:45.940 "product_name": "Malloc disk", 00:16:45.940 "block_size": 512, 00:16:45.940 "num_blocks": 65536, 00:16:45.940 "uuid": "faca1097-8865-4922-9634-2600ec7cfcee", 00:16:45.940 "assigned_rate_limits": { 00:16:45.940 "rw_ios_per_sec": 0, 00:16:45.940 "rw_mbytes_per_sec": 0, 00:16:45.940 "r_mbytes_per_sec": 0, 00:16:45.940 "w_mbytes_per_sec": 
0 00:16:45.940 }, 00:16:45.940 "claimed": true, 00:16:45.940 "claim_type": "exclusive_write", 00:16:45.940 "zoned": false, 00:16:45.940 "supported_io_types": { 00:16:45.940 "read": true, 00:16:45.940 "write": true, 00:16:45.940 "unmap": true, 00:16:45.940 "write_zeroes": true, 00:16:45.940 "flush": true, 00:16:45.940 "reset": true, 00:16:45.940 "compare": false, 00:16:45.940 "compare_and_write": false, 00:16:45.940 "abort": true, 00:16:45.940 "nvme_admin": false, 00:16:45.940 "nvme_io": false 00:16:45.940 }, 00:16:45.940 "memory_domains": [ 00:16:45.940 { 00:16:45.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.940 "dma_device_type": 2 00:16:45.940 } 00:16:45.940 ], 00:16:45.940 "driver_specific": {} 00:16:45.940 } 00:16:45.940 ] 00:16:45.940 16:55:34 -- common/autotest_common.sh@905 -- # return 0 00:16:45.940 16:55:34 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:45.940 16:55:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:45.940 16:55:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:45.940 16:55:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:45.940 16:55:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:45.940 16:55:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:45.940 16:55:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:45.940 16:55:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:45.940 16:55:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:45.940 16:55:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:45.940 16:55:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.940 16:55:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.198 16:55:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:46.198 "name": "Existed_Raid", 00:16:46.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.198 "strip_size_kb": 64, 00:16:46.198 "state": "configuring", 00:16:46.198 "raid_level": "concat", 00:16:46.198 "superblock": false, 00:16:46.198 "num_base_bdevs": 3, 00:16:46.198 "num_base_bdevs_discovered": 1, 00:16:46.198 "num_base_bdevs_operational": 3, 00:16:46.198 "base_bdevs_list": [ 00:16:46.198 { 00:16:46.198 "name": "BaseBdev1", 00:16:46.198 "uuid": "faca1097-8865-4922-9634-2600ec7cfcee", 00:16:46.198 "is_configured": true, 00:16:46.198 "data_offset": 0, 00:16:46.198 "data_size": 65536 00:16:46.198 }, 00:16:46.198 { 00:16:46.198 "name": "BaseBdev2", 00:16:46.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.198 "is_configured": false, 00:16:46.198 "data_offset": 0, 00:16:46.198 "data_size": 0 00:16:46.198 }, 00:16:46.198 { 00:16:46.198 "name": "BaseBdev3", 00:16:46.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.198 "is_configured": false, 00:16:46.198 "data_offset": 0, 00:16:46.198 "data_size": 0 00:16:46.198 } 00:16:46.198 ] 00:16:46.198 }' 00:16:46.198 16:55:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:46.198 16:55:34 -- common/autotest_common.sh@10 -- # set +x 00:16:46.763 16:55:35 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:47.022 [2024-11-05 16:55:35.683172] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:47.022 [2024-11-05 16:55:35.683450] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006680 name Existed_Raid, state configuring 00:16:47.022 16:55:35 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:47.022 16:55:35 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:47.022 [2024-11-05 16:55:35.875242] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.022 [2024-11-05 16:55:35.877228] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:47.022 [2024-11-05 16:55:35.877451] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:47.022 [2024-11-05 16:55:35.877590] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:47.022 [2024-11-05 16:55:35.877670] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:47.022 16:55:35 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:47.022 16:55:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:47.022 16:55:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:47.022 16:55:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:47.022 16:55:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:47.022 16:55:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:47.022 16:55:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:47.022 16:55:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:47.022 16:55:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:47.022 16:55:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:47.022 16:55:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:47.022 16:55:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:47.022 16:55:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.022 16:55:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.280 16:55:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:47.280 "name": "Existed_Raid", 00:16:47.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.280 "strip_size_kb": 64, 00:16:47.280 "state": "configuring", 00:16:47.280 "raid_level": "concat", 00:16:47.280 "superblock": false, 00:16:47.280 "num_base_bdevs": 3, 00:16:47.280 "num_base_bdevs_discovered": 1, 00:16:47.280 "num_base_bdevs_operational": 3, 00:16:47.280 "base_bdevs_list": [ 00:16:47.281 { 00:16:47.281 "name": "BaseBdev1", 00:16:47.281 "uuid": "faca1097-8865-4922-9634-2600ec7cfcee", 00:16:47.281 "is_configured": true, 00:16:47.281 "data_offset": 0, 00:16:47.281 "data_size": 65536 00:16:47.281 }, 00:16:47.281 { 00:16:47.281 "name": "BaseBdev2", 00:16:47.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.281 "is_configured": false, 00:16:47.281 "data_offset": 0, 00:16:47.281 "data_size": 0 00:16:47.281 }, 00:16:47.281 { 00:16:47.281 "name": "BaseBdev3", 00:16:47.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.281 "is_configured": false, 00:16:47.281 "data_offset": 0, 00:16:47.281 "data_size": 0 00:16:47.281 } 00:16:47.281 ] 00:16:47.281 }' 00:16:47.281 16:55:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:47.281 16:55:36 -- common/autotest_common.sh@10 -- # set +x 00:16:47.847 16:55:36 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:48.414 [2024-11-05 16:55:37.047464] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:48.414 BaseBdev2 00:16:48.414 16:55:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:48.414 16:55:37 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:48.414 16:55:37 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:48.414 16:55:37 -- common/autotest_common.sh@899 -- # local i 00:16:48.414 16:55:37 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:48.414 16:55:37 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:48.414 16:55:37 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:48.673 16:55:37 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:48.931 [ 00:16:48.931 { 00:16:48.931 "name": "BaseBdev2", 00:16:48.931 "aliases": [ 00:16:48.931 "4bad88bd-9267-4c8b-aa0b-20d2b565d80c" 00:16:48.931 ], 00:16:48.931 "product_name": "Malloc disk", 00:16:48.931 "block_size": 512, 00:16:48.931 "num_blocks": 65536, 00:16:48.931 "uuid": "4bad88bd-9267-4c8b-aa0b-20d2b565d80c", 00:16:48.931 "assigned_rate_limits": { 00:16:48.931 "rw_ios_per_sec": 0, 00:16:48.931 "rw_mbytes_per_sec": 0, 00:16:48.931 "r_mbytes_per_sec": 0, 00:16:48.931 "w_mbytes_per_sec": 0 00:16:48.931 }, 00:16:48.931 "claimed": true, 00:16:48.931 "claim_type": "exclusive_write", 00:16:48.931 "zoned": false, 00:16:48.931 "supported_io_types": { 00:16:48.931 "read": true, 00:16:48.931 "write": true, 00:16:48.931 "unmap": true, 00:16:48.931 "write_zeroes": true, 00:16:48.931 "flush": true, 00:16:48.931 "reset": true, 00:16:48.931 "compare": false, 00:16:48.931 "compare_and_write": false, 00:16:48.931 "abort": true, 00:16:48.931 "nvme_admin": false, 00:16:48.931 "nvme_io": false 00:16:48.931 }, 00:16:48.931 "memory_domains": [ 00:16:48.931 { 00:16:48.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.931 "dma_device_type": 2 00:16:48.931 } 00:16:48.931 ], 00:16:48.931 "driver_specific": {} 00:16:48.931 } 00:16:48.931 ] 00:16:48.931 16:55:37 -- common/autotest_common.sh@905 -- # return 0 00:16:48.931 16:55:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:48.931 16:55:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:48.931 16:55:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:48.931 16:55:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:48.931 16:55:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:48.931 16:55:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:48.931 16:55:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:48.931 16:55:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:48.931 16:55:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:48.931 16:55:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:48.931 16:55:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:48.931 16:55:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:48.931 16:55:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.931 16:55:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
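Every BaseBdevN created above passes through waitforbdev before the RAID layer claims it: per the trace, the helper first issues bdev_wait_for_examine so pending examine callbacks settle, then bdev_get_bdevs -b <name> -t <ms>, which blocks up to the timeout for the bdev to register. A condensed sketch of the flow the xtrace expands (RPC names, paths, and the 2000 ms default as logged; the failure message is illustrative):

    waitforbdev() {
        local bdev_name=$1 bdev_timeout=${2:-2000}
        local rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
        # Drain outstanding examine callbacks before querying.
        $rpc bdev_wait_for_examine
        # -t makes the query wait up to bdev_timeout ms for the bdev to appear.
        $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null ||
            { echo "bdev $bdev_name did not appear" >&2; return 1; }
    }
    waitforbdev BaseBdev2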
00:16:49.189 16:55:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:49.190 "name": "Existed_Raid", 00:16:49.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.190 "strip_size_kb": 64, 00:16:49.190 "state": "configuring", 00:16:49.190 "raid_level": "concat", 00:16:49.190 "superblock": false, 00:16:49.190 "num_base_bdevs": 3, 00:16:49.190 "num_base_bdevs_discovered": 2, 00:16:49.190 "num_base_bdevs_operational": 3, 00:16:49.190 "base_bdevs_list": [ 00:16:49.190 { 00:16:49.190 "name": "BaseBdev1", 00:16:49.190 "uuid": "faca1097-8865-4922-9634-2600ec7cfcee", 00:16:49.190 "is_configured": true, 00:16:49.190 "data_offset": 0, 00:16:49.190 "data_size": 65536 00:16:49.190 }, 00:16:49.190 { 00:16:49.190 "name": "BaseBdev2", 00:16:49.190 "uuid": "4bad88bd-9267-4c8b-aa0b-20d2b565d80c", 00:16:49.190 "is_configured": true, 00:16:49.190 "data_offset": 0, 00:16:49.190 "data_size": 65536 00:16:49.190 }, 00:16:49.190 { 00:16:49.190 "name": "BaseBdev3", 00:16:49.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.190 "is_configured": false, 00:16:49.190 "data_offset": 0, 00:16:49.190 "data_size": 0 00:16:49.190 } 00:16:49.190 ] 00:16:49.190 }' 00:16:49.190 16:55:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:49.190 16:55:37 -- common/autotest_common.sh@10 -- # set +x 00:16:49.756 16:55:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:50.015 [2024-11-05 16:55:38.738561] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:50.015 [2024-11-05 16:55:38.738772] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:16:50.015 [2024-11-05 16:55:38.738817] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:50.015 [2024-11-05 16:55:38.739086] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:50.015 [2024-11-05 16:55:38.739499] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:16:50.015 [2024-11-05 16:55:38.739674] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:16:50.015 [2024-11-05 16:55:38.740049] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.015 BaseBdev3 00:16:50.015 16:55:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:50.015 16:55:38 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:50.015 16:55:38 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:50.015 16:55:38 -- common/autotest_common.sh@899 -- # local i 00:16:50.015 16:55:38 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:50.015 16:55:38 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:50.015 16:55:38 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:50.273 16:55:39 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:50.532 [ 00:16:50.532 { 00:16:50.532 "name": "BaseBdev3", 00:16:50.532 "aliases": [ 00:16:50.532 "b48b1d08-8de4-4b2e-a2a4-5c4f42f97aa5" 00:16:50.532 ], 00:16:50.532 "product_name": "Malloc disk", 00:16:50.532 "block_size": 512, 00:16:50.532 "num_blocks": 65536, 00:16:50.532 "uuid": "b48b1d08-8de4-4b2e-a2a4-5c4f42f97aa5", 00:16:50.532 "assigned_rate_limits": { 00:16:50.532 
"rw_ios_per_sec": 0, 00:16:50.532 "rw_mbytes_per_sec": 0, 00:16:50.532 "r_mbytes_per_sec": 0, 00:16:50.532 "w_mbytes_per_sec": 0 00:16:50.532 }, 00:16:50.532 "claimed": true, 00:16:50.532 "claim_type": "exclusive_write", 00:16:50.532 "zoned": false, 00:16:50.532 "supported_io_types": { 00:16:50.532 "read": true, 00:16:50.532 "write": true, 00:16:50.532 "unmap": true, 00:16:50.532 "write_zeroes": true, 00:16:50.532 "flush": true, 00:16:50.532 "reset": true, 00:16:50.532 "compare": false, 00:16:50.532 "compare_and_write": false, 00:16:50.532 "abort": true, 00:16:50.532 "nvme_admin": false, 00:16:50.532 "nvme_io": false 00:16:50.532 }, 00:16:50.532 "memory_domains": [ 00:16:50.532 { 00:16:50.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.532 "dma_device_type": 2 00:16:50.532 } 00:16:50.532 ], 00:16:50.532 "driver_specific": {} 00:16:50.532 } 00:16:50.532 ] 00:16:50.532 16:55:39 -- common/autotest_common.sh@905 -- # return 0 00:16:50.532 16:55:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:50.532 16:55:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:50.532 16:55:39 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:50.532 16:55:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:50.532 16:55:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:50.532 16:55:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:50.532 16:55:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:50.532 16:55:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:50.532 16:55:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:50.532 16:55:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:50.532 16:55:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:50.532 16:55:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:50.532 16:55:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.532 16:55:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.791 16:55:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:50.791 "name": "Existed_Raid", 00:16:50.791 "uuid": "aaca4d19-24ee-423d-8122-1ce8e4066182", 00:16:50.791 "strip_size_kb": 64, 00:16:50.791 "state": "online", 00:16:50.791 "raid_level": "concat", 00:16:50.791 "superblock": false, 00:16:50.791 "num_base_bdevs": 3, 00:16:50.791 "num_base_bdevs_discovered": 3, 00:16:50.791 "num_base_bdevs_operational": 3, 00:16:50.791 "base_bdevs_list": [ 00:16:50.791 { 00:16:50.791 "name": "BaseBdev1", 00:16:50.791 "uuid": "faca1097-8865-4922-9634-2600ec7cfcee", 00:16:50.791 "is_configured": true, 00:16:50.791 "data_offset": 0, 00:16:50.791 "data_size": 65536 00:16:50.791 }, 00:16:50.791 { 00:16:50.791 "name": "BaseBdev2", 00:16:50.791 "uuid": "4bad88bd-9267-4c8b-aa0b-20d2b565d80c", 00:16:50.791 "is_configured": true, 00:16:50.791 "data_offset": 0, 00:16:50.791 "data_size": 65536 00:16:50.791 }, 00:16:50.791 { 00:16:50.791 "name": "BaseBdev3", 00:16:50.791 "uuid": "b48b1d08-8de4-4b2e-a2a4-5c4f42f97aa5", 00:16:50.791 "is_configured": true, 00:16:50.791 "data_offset": 0, 00:16:50.791 "data_size": 65536 00:16:50.791 } 00:16:50.791 ] 00:16:50.791 }' 00:16:50.791 16:55:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:50.791 16:55:39 -- common/autotest_common.sh@10 -- # set +x 00:16:51.388 16:55:40 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:51.668 [2024-11-05 16:55:40.455797] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:51.668 [2024-11-05 16:55:40.456046] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.668 [2024-11-05 16:55:40.456336] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.668 16:55:40 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:51.668 16:55:40 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:51.668 16:55:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:51.668 16:55:40 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:51.668 16:55:40 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:51.668 16:55:40 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:51.668 16:55:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:51.668 16:55:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:51.668 16:55:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:51.668 16:55:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:51.668 16:55:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:51.668 16:55:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:51.668 16:55:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:51.668 16:55:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:51.668 16:55:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:51.668 16:55:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.668 16:55:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.235 16:55:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:52.235 "name": "Existed_Raid", 00:16:52.235 "uuid": "aaca4d19-24ee-423d-8122-1ce8e4066182", 00:16:52.235 "strip_size_kb": 64, 00:16:52.235 "state": "offline", 00:16:52.235 "raid_level": "concat", 00:16:52.235 "superblock": false, 00:16:52.235 "num_base_bdevs": 3, 00:16:52.235 "num_base_bdevs_discovered": 2, 00:16:52.235 "num_base_bdevs_operational": 2, 00:16:52.235 "base_bdevs_list": [ 00:16:52.235 { 00:16:52.235 "name": null, 00:16:52.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.236 "is_configured": false, 00:16:52.236 "data_offset": 0, 00:16:52.236 "data_size": 65536 00:16:52.236 }, 00:16:52.236 { 00:16:52.236 "name": "BaseBdev2", 00:16:52.236 "uuid": "4bad88bd-9267-4c8b-aa0b-20d2b565d80c", 00:16:52.236 "is_configured": true, 00:16:52.236 "data_offset": 0, 00:16:52.236 "data_size": 65536 00:16:52.236 }, 00:16:52.236 { 00:16:52.236 "name": "BaseBdev3", 00:16:52.236 "uuid": "b48b1d08-8de4-4b2e-a2a4-5c4f42f97aa5", 00:16:52.236 "is_configured": true, 00:16:52.236 "data_offset": 0, 00:16:52.236 "data_size": 65536 00:16:52.236 } 00:16:52.236 ] 00:16:52.236 }' 00:16:52.236 16:55:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:52.236 16:55:40 -- common/autotest_common.sh@10 -- # set +x 00:16:52.801 16:55:41 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:52.801 16:55:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:52.801 16:55:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:52.801 16:55:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.059 16:55:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:53.059 16:55:41 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:53.059 16:55:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:53.317 [2024-11-05 16:55:42.019807] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:53.317 16:55:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:53.317 16:55:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:53.317 16:55:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.317 16:55:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:53.574 16:55:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:53.574 16:55:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:53.575 16:55:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:53.832 [2024-11-05 16:55:42.654422] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:53.832 [2024-11-05 16:55:42.654701] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:16:54.091 16:55:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:54.091 16:55:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:54.091 16:55:42 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.091 16:55:42 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:54.349 16:55:42 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:54.349 16:55:42 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:54.349 16:55:42 -- bdev/bdev_raid.sh@287 -- # killprocess 115953 00:16:54.349 16:55:42 -- common/autotest_common.sh@936 -- # '[' -z 115953 ']' 00:16:54.349 16:55:42 -- common/autotest_common.sh@940 -- # kill -0 115953 00:16:54.349 16:55:42 -- common/autotest_common.sh@941 -- # uname 00:16:54.349 16:55:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:54.349 16:55:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115953 00:16:54.349 16:55:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:54.349 16:55:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:54.349 16:55:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115953' 00:16:54.349 killing process with pid 115953 00:16:54.349 16:55:43 -- common/autotest_common.sh@955 -- # kill 115953 00:16:54.349 [2024-11-05 16:55:43.030181] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:54.349 16:55:43 -- common/autotest_common.sh@960 -- # wait 115953 00:16:54.349 [2024-11-05 16:55:43.030471] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:55.283 16:55:43 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:55.283 00:16:55.283 real 0m12.582s 00:16:55.283 user 0m22.284s 00:16:55.283 sys 0m1.489s 00:16:55.283 16:55:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:55.283 16:55:43 -- common/autotest_common.sh@10 -- # set +x 00:16:55.283 ************************************ 00:16:55.283 END TEST raid_state_function_test 00:16:55.283 ************************************ 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:16:55.283 16:55:44 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 
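Teardown of the first state-function test runs through killprocess, which the xtrace above expands step by step: confirm the pid is non-empty and alive with kill -0, read the command name with ps (expecting reactor_0 rather than a sudo wrapper), then kill and wait so the app's raid_bdev_fini_start/raid_bdev_exit shutdown finishes before the next test starts. A condensed reconstruction of those traced steps (simplified; the real autotest_common.sh helper covers more platforms and edge cases):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2> /dev/null || return 0   # nothing left to kill
        if [ "$(uname)" = Linux ]; then
            # Refuse to kill a sudo wrapper directly; the trace expects reactor_0 here.
            [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap it so the RPC socket is released before the next test
    }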
00:16:55.283 16:55:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:55.283 16:55:44 -- common/autotest_common.sh@10 -- # set +x 00:16:55.283 ************************************ 00:16:55.283 START TEST raid_state_function_test_sb 00:16:55.283 ************************************ 00:16:55.283 16:55:44 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 3 true 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:55.283 16:55:44 -- bdev/bdev_raid.sh@226 -- # raid_pid=116335 00:16:55.284 16:55:44 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116335' 00:16:55.284 16:55:44 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:55.284 Process raid pid: 116335 00:16:55.284 16:55:44 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116335 /var/tmp/spdk-raid.sock 00:16:55.284 16:55:44 -- common/autotest_common.sh@829 -- # '[' -z 116335 ']' 00:16:55.284 16:55:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:55.284 16:55:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:55.284 16:55:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:55.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:55.284 16:55:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:55.284 16:55:44 -- common/autotest_common.sh@10 -- # set +x 00:16:55.284 [2024-11-05 16:55:44.105243] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
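raid_state_function_test_sb reruns the same test body with superblock=true, so superblock_create_arg becomes -s and every bdev_raid_create below carries that extra flag. The visible effect in the JSON dumps is data_offset moving from 0 to 2048 and data_size shrinking from 65536 to 63488 blocks, consistent with the head of each base bdev being reserved for the on-disk superblock. The two create invocations side by side (arguments copied from the two runs; rpc.py shortened to its basename):

    # raid_state_function_test (superblock=false): dumps show data_offset 0, data_size 65536
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # raid_state_function_test_sb (superblock=true): note the -s; data_offset 2048, data_size 63488
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid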
00:16:55.284 [2024-11-05 16:55:44.105654] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.542 [2024-11-05 16:55:44.278587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.800 [2024-11-05 16:55:44.510994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.800 [2024-11-05 16:55:44.684528] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.366 16:55:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:56.366 16:55:45 -- common/autotest_common.sh@862 -- # return 0 00:16:56.366 16:55:45 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:56.624 [2024-11-05 16:55:45.298850] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:56.624 [2024-11-05 16:55:45.299139] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:56.624 [2024-11-05 16:55:45.299250] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:56.624 [2024-11-05 16:55:45.299311] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:56.624 [2024-11-05 16:55:45.299404] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:56.624 [2024-11-05 16:55:45.299485] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:56.624 16:55:45 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:56.624 16:55:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:56.624 16:55:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:56.624 16:55:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:56.624 16:55:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:56.624 16:55:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:56.624 16:55:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:56.624 16:55:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:56.624 16:55:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:56.624 16:55:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:56.624 16:55:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.625 16:55:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.883 16:55:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:56.883 "name": "Existed_Raid", 00:16:56.883 "uuid": "94f92f41-7738-4f9f-aa8d-fca9b5100788", 00:16:56.883 "strip_size_kb": 64, 00:16:56.883 "state": "configuring", 00:16:56.883 "raid_level": "concat", 00:16:56.883 "superblock": true, 00:16:56.883 "num_base_bdevs": 3, 00:16:56.883 "num_base_bdevs_discovered": 0, 00:16:56.883 "num_base_bdevs_operational": 3, 00:16:56.883 "base_bdevs_list": [ 00:16:56.883 { 00:16:56.883 "name": "BaseBdev1", 00:16:56.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.883 "is_configured": false, 00:16:56.883 "data_offset": 0, 00:16:56.883 "data_size": 0 00:16:56.883 }, 00:16:56.883 { 00:16:56.883 "name": "BaseBdev2", 00:16:56.883 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:56.883 "is_configured": false, 00:16:56.883 "data_offset": 0, 00:16:56.883 "data_size": 0 00:16:56.883 }, 00:16:56.883 { 00:16:56.883 "name": "BaseBdev3", 00:16:56.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.883 "is_configured": false, 00:16:56.883 "data_offset": 0, 00:16:56.883 "data_size": 0 00:16:56.883 } 00:16:56.883 ] 00:16:56.883 }' 00:16:56.883 16:55:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:56.883 16:55:45 -- common/autotest_common.sh@10 -- # set +x 00:16:57.449 16:55:46 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:57.707 [2024-11-05 16:55:46.359000] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:57.707 [2024-11-05 16:55:46.359285] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:16:57.707 16:55:46 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:57.966 [2024-11-05 16:55:46.615117] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:57.966 [2024-11-05 16:55:46.615409] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:57.966 [2024-11-05 16:55:46.615509] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:57.966 [2024-11-05 16:55:46.615579] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:57.966 [2024-11-05 16:55:46.615672] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:57.966 [2024-11-05 16:55:46.615735] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:57.966 16:55:46 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:58.223 [2024-11-05 16:55:46.872993] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.223 BaseBdev1 00:16:58.223 16:55:46 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:58.223 16:55:46 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:58.223 16:55:46 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:58.223 16:55:46 -- common/autotest_common.sh@899 -- # local i 00:16:58.223 16:55:46 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:58.223 16:55:46 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:58.223 16:55:46 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:58.481 16:55:47 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:58.481 [ 00:16:58.481 { 00:16:58.481 "name": "BaseBdev1", 00:16:58.481 "aliases": [ 00:16:58.481 "46357f4b-1450-491b-955b-77ede835c592" 00:16:58.481 ], 00:16:58.481 "product_name": "Malloc disk", 00:16:58.481 "block_size": 512, 00:16:58.481 "num_blocks": 65536, 00:16:58.481 "uuid": "46357f4b-1450-491b-955b-77ede835c592", 00:16:58.481 "assigned_rate_limits": { 00:16:58.481 "rw_ios_per_sec": 0, 00:16:58.481 "rw_mbytes_per_sec": 0, 00:16:58.481 "r_mbytes_per_sec": 0, 00:16:58.481 
"w_mbytes_per_sec": 0 00:16:58.481 }, 00:16:58.481 "claimed": true, 00:16:58.481 "claim_type": "exclusive_write", 00:16:58.481 "zoned": false, 00:16:58.481 "supported_io_types": { 00:16:58.481 "read": true, 00:16:58.481 "write": true, 00:16:58.481 "unmap": true, 00:16:58.481 "write_zeroes": true, 00:16:58.481 "flush": true, 00:16:58.481 "reset": true, 00:16:58.481 "compare": false, 00:16:58.481 "compare_and_write": false, 00:16:58.481 "abort": true, 00:16:58.481 "nvme_admin": false, 00:16:58.481 "nvme_io": false 00:16:58.481 }, 00:16:58.481 "memory_domains": [ 00:16:58.481 { 00:16:58.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.481 "dma_device_type": 2 00:16:58.481 } 00:16:58.481 ], 00:16:58.481 "driver_specific": {} 00:16:58.481 } 00:16:58.481 ] 00:16:58.481 16:55:47 -- common/autotest_common.sh@905 -- # return 0 00:16:58.481 16:55:47 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:58.481 16:55:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:58.481 16:55:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:58.481 16:55:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:58.481 16:55:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:58.482 16:55:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:58.482 16:55:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:58.482 16:55:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:58.482 16:55:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:58.482 16:55:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:58.482 16:55:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.482 16:55:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.740 16:55:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:58.740 "name": "Existed_Raid", 00:16:58.740 "uuid": "998bc683-b19b-496d-9dd4-03fe952f1754", 00:16:58.740 "strip_size_kb": 64, 00:16:58.740 "state": "configuring", 00:16:58.740 "raid_level": "concat", 00:16:58.740 "superblock": true, 00:16:58.740 "num_base_bdevs": 3, 00:16:58.740 "num_base_bdevs_discovered": 1, 00:16:58.740 "num_base_bdevs_operational": 3, 00:16:58.740 "base_bdevs_list": [ 00:16:58.740 { 00:16:58.740 "name": "BaseBdev1", 00:16:58.740 "uuid": "46357f4b-1450-491b-955b-77ede835c592", 00:16:58.740 "is_configured": true, 00:16:58.740 "data_offset": 2048, 00:16:58.740 "data_size": 63488 00:16:58.740 }, 00:16:58.740 { 00:16:58.740 "name": "BaseBdev2", 00:16:58.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.740 "is_configured": false, 00:16:58.740 "data_offset": 0, 00:16:58.740 "data_size": 0 00:16:58.740 }, 00:16:58.740 { 00:16:58.740 "name": "BaseBdev3", 00:16:58.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.740 "is_configured": false, 00:16:58.740 "data_offset": 0, 00:16:58.740 "data_size": 0 00:16:58.740 } 00:16:58.740 ] 00:16:58.740 }' 00:16:58.740 16:55:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:58.740 16:55:47 -- common/autotest_common.sh@10 -- # set +x 00:16:59.677 16:55:48 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:59.677 [2024-11-05 16:55:48.485427] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:59.677 [2024-11-05 16:55:48.485807] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:59.677 16:55:48 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:59.677 16:55:48 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:59.935 16:55:48 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:00.500 BaseBdev1 00:17:00.500 16:55:49 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:00.500 16:55:49 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:00.500 16:55:49 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:00.500 16:55:49 -- common/autotest_common.sh@899 -- # local i 00:17:00.500 16:55:49 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:00.500 16:55:49 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:00.500 16:55:49 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:00.500 16:55:49 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:01.065 [ 00:17:01.065 { 00:17:01.065 "name": "BaseBdev1", 00:17:01.065 "aliases": [ 00:17:01.065 "4636fc7f-351b-4c19-8629-6e1db8e522db" 00:17:01.065 ], 00:17:01.065 "product_name": "Malloc disk", 00:17:01.066 "block_size": 512, 00:17:01.066 "num_blocks": 65536, 00:17:01.066 "uuid": "4636fc7f-351b-4c19-8629-6e1db8e522db", 00:17:01.066 "assigned_rate_limits": { 00:17:01.066 "rw_ios_per_sec": 0, 00:17:01.066 "rw_mbytes_per_sec": 0, 00:17:01.066 "r_mbytes_per_sec": 0, 00:17:01.066 "w_mbytes_per_sec": 0 00:17:01.066 }, 00:17:01.066 "claimed": false, 00:17:01.066 "zoned": false, 00:17:01.066 "supported_io_types": { 00:17:01.066 "read": true, 00:17:01.066 "write": true, 00:17:01.066 "unmap": true, 00:17:01.066 "write_zeroes": true, 00:17:01.066 "flush": true, 00:17:01.066 "reset": true, 00:17:01.066 "compare": false, 00:17:01.066 "compare_and_write": false, 00:17:01.066 "abort": true, 00:17:01.066 "nvme_admin": false, 00:17:01.066 "nvme_io": false 00:17:01.066 }, 00:17:01.066 "memory_domains": [ 00:17:01.066 { 00:17:01.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.066 "dma_device_type": 2 00:17:01.066 } 00:17:01.066 ], 00:17:01.066 "driver_specific": {} 00:17:01.066 } 00:17:01.066 ] 00:17:01.066 16:55:49 -- common/autotest_common.sh@905 -- # return 0 00:17:01.066 16:55:49 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:01.066 [2024-11-05 16:55:49.961886] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.324 [2024-11-05 16:55:49.963929] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:01.324 [2024-11-05 16:55:49.964139] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:01.324 [2024-11-05 16:55:49.964248] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:01.324 [2024-11-05 16:55:49.964314] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:01.324 16:55:49 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:01.324 16:55:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:01.324 
16:55:49 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:01.324 16:55:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:01.324 16:55:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:01.324 16:55:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:01.324 16:55:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:01.324 16:55:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:01.324 16:55:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:01.324 16:55:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:01.324 16:55:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:01.324 16:55:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:01.324 16:55:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.324 16:55:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.324 16:55:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:01.324 "name": "Existed_Raid", 00:17:01.324 "uuid": "1e1c3cd7-d493-4cab-b78b-fa08c0f5f517", 00:17:01.324 "strip_size_kb": 64, 00:17:01.324 "state": "configuring", 00:17:01.324 "raid_level": "concat", 00:17:01.324 "superblock": true, 00:17:01.324 "num_base_bdevs": 3, 00:17:01.324 "num_base_bdevs_discovered": 1, 00:17:01.324 "num_base_bdevs_operational": 3, 00:17:01.324 "base_bdevs_list": [ 00:17:01.324 { 00:17:01.324 "name": "BaseBdev1", 00:17:01.324 "uuid": "4636fc7f-351b-4c19-8629-6e1db8e522db", 00:17:01.324 "is_configured": true, 00:17:01.324 "data_offset": 2048, 00:17:01.324 "data_size": 63488 00:17:01.324 }, 00:17:01.324 { 00:17:01.324 "name": "BaseBdev2", 00:17:01.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.324 "is_configured": false, 00:17:01.324 "data_offset": 0, 00:17:01.324 "data_size": 0 00:17:01.324 }, 00:17:01.324 { 00:17:01.324 "name": "BaseBdev3", 00:17:01.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.324 "is_configured": false, 00:17:01.324 "data_offset": 0, 00:17:01.324 "data_size": 0 00:17:01.324 } 00:17:01.324 ] 00:17:01.324 }' 00:17:01.324 16:55:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:01.324 16:55:50 -- common/autotest_common.sh@10 -- # set +x 00:17:02.258 16:55:50 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:02.258 [2024-11-05 16:55:51.139074] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:02.258 BaseBdev2 00:17:02.516 16:55:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:02.516 16:55:51 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:02.516 16:55:51 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:02.516 16:55:51 -- common/autotest_common.sh@899 -- # local i 00:17:02.516 16:55:51 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:02.516 16:55:51 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:02.516 16:55:51 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:02.516 16:55:51 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:02.774 [ 00:17:02.774 { 00:17:02.774 "name": "BaseBdev2", 00:17:02.774 "aliases": [ 00:17:02.774 
"5046bebd-5410-4291-b5b5-150da0024f28" 00:17:02.774 ], 00:17:02.774 "product_name": "Malloc disk", 00:17:02.774 "block_size": 512, 00:17:02.774 "num_blocks": 65536, 00:17:02.774 "uuid": "5046bebd-5410-4291-b5b5-150da0024f28", 00:17:02.774 "assigned_rate_limits": { 00:17:02.774 "rw_ios_per_sec": 0, 00:17:02.774 "rw_mbytes_per_sec": 0, 00:17:02.774 "r_mbytes_per_sec": 0, 00:17:02.774 "w_mbytes_per_sec": 0 00:17:02.774 }, 00:17:02.774 "claimed": true, 00:17:02.774 "claim_type": "exclusive_write", 00:17:02.774 "zoned": false, 00:17:02.774 "supported_io_types": { 00:17:02.774 "read": true, 00:17:02.774 "write": true, 00:17:02.774 "unmap": true, 00:17:02.774 "write_zeroes": true, 00:17:02.774 "flush": true, 00:17:02.774 "reset": true, 00:17:02.774 "compare": false, 00:17:02.774 "compare_and_write": false, 00:17:02.774 "abort": true, 00:17:02.774 "nvme_admin": false, 00:17:02.774 "nvme_io": false 00:17:02.774 }, 00:17:02.774 "memory_domains": [ 00:17:02.774 { 00:17:02.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.774 "dma_device_type": 2 00:17:02.774 } 00:17:02.774 ], 00:17:02.774 "driver_specific": {} 00:17:02.774 } 00:17:02.774 ] 00:17:02.774 16:55:51 -- common/autotest_common.sh@905 -- # return 0 00:17:02.774 16:55:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:02.774 16:55:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:02.774 16:55:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:02.774 16:55:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:02.774 16:55:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:02.774 16:55:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:02.774 16:55:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:02.774 16:55:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:02.774 16:55:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:02.774 16:55:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:02.774 16:55:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:02.774 16:55:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:02.774 16:55:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.774 16:55:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.032 16:55:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:03.032 "name": "Existed_Raid", 00:17:03.032 "uuid": "1e1c3cd7-d493-4cab-b78b-fa08c0f5f517", 00:17:03.032 "strip_size_kb": 64, 00:17:03.032 "state": "configuring", 00:17:03.032 "raid_level": "concat", 00:17:03.032 "superblock": true, 00:17:03.032 "num_base_bdevs": 3, 00:17:03.032 "num_base_bdevs_discovered": 2, 00:17:03.032 "num_base_bdevs_operational": 3, 00:17:03.032 "base_bdevs_list": [ 00:17:03.032 { 00:17:03.032 "name": "BaseBdev1", 00:17:03.032 "uuid": "4636fc7f-351b-4c19-8629-6e1db8e522db", 00:17:03.032 "is_configured": true, 00:17:03.032 "data_offset": 2048, 00:17:03.032 "data_size": 63488 00:17:03.032 }, 00:17:03.032 { 00:17:03.032 "name": "BaseBdev2", 00:17:03.032 "uuid": "5046bebd-5410-4291-b5b5-150da0024f28", 00:17:03.032 "is_configured": true, 00:17:03.032 "data_offset": 2048, 00:17:03.032 "data_size": 63488 00:17:03.032 }, 00:17:03.032 { 00:17:03.032 "name": "BaseBdev3", 00:17:03.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.032 "is_configured": false, 00:17:03.032 "data_offset": 0, 00:17:03.032 "data_size": 0 
00:17:03.032 } 00:17:03.032 ] 00:17:03.032 }' 00:17:03.032 16:55:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:03.032 16:55:51 -- common/autotest_common.sh@10 -- # set +x 00:17:03.963 16:55:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:04.221 [2024-11-05 16:55:52.896345] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:04.221 [2024-11-05 16:55:52.896823] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:17:04.221 [2024-11-05 16:55:52.896979] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:04.221 [2024-11-05 16:55:52.897153] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:04.221 BaseBdev3 00:17:04.221 [2024-11-05 16:55:52.897533] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:17:04.221 [2024-11-05 16:55:52.897695] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:17:04.221 [2024-11-05 16:55:52.897883] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.221 16:55:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:04.221 16:55:52 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:04.221 16:55:52 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:04.221 16:55:52 -- common/autotest_common.sh@899 -- # local i 00:17:04.221 16:55:52 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:04.221 16:55:52 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:04.221 16:55:52 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:04.479 16:55:53 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:04.738 [ 00:17:04.738 { 00:17:04.738 "name": "BaseBdev3", 00:17:04.738 "aliases": [ 00:17:04.738 "0a293c53-2606-4079-b402-edc27078378e" 00:17:04.738 ], 00:17:04.738 "product_name": "Malloc disk", 00:17:04.738 "block_size": 512, 00:17:04.738 "num_blocks": 65536, 00:17:04.738 "uuid": "0a293c53-2606-4079-b402-edc27078378e", 00:17:04.738 "assigned_rate_limits": { 00:17:04.738 "rw_ios_per_sec": 0, 00:17:04.738 "rw_mbytes_per_sec": 0, 00:17:04.738 "r_mbytes_per_sec": 0, 00:17:04.738 "w_mbytes_per_sec": 0 00:17:04.738 }, 00:17:04.738 "claimed": true, 00:17:04.738 "claim_type": "exclusive_write", 00:17:04.738 "zoned": false, 00:17:04.738 "supported_io_types": { 00:17:04.738 "read": true, 00:17:04.738 "write": true, 00:17:04.738 "unmap": true, 00:17:04.738 "write_zeroes": true, 00:17:04.738 "flush": true, 00:17:04.738 "reset": true, 00:17:04.738 "compare": false, 00:17:04.738 "compare_and_write": false, 00:17:04.738 "abort": true, 00:17:04.738 "nvme_admin": false, 00:17:04.738 "nvme_io": false 00:17:04.738 }, 00:17:04.738 "memory_domains": [ 00:17:04.738 { 00:17:04.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.738 "dma_device_type": 2 00:17:04.738 } 00:17:04.738 ], 00:17:04.738 "driver_specific": {} 00:17:04.738 } 00:17:04.738 ] 00:17:04.738 16:55:53 -- common/autotest_common.sh@905 -- # return 0 00:17:04.738 16:55:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:04.738 16:55:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:04.738 16:55:53 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:04.738 16:55:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:04.738 16:55:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:04.738 16:55:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:04.738 16:55:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:04.738 16:55:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:04.738 16:55:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:04.738 16:55:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:04.738 16:55:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:04.738 16:55:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:04.738 16:55:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.738 16:55:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.996 16:55:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:04.996 "name": "Existed_Raid", 00:17:04.996 "uuid": "1e1c3cd7-d493-4cab-b78b-fa08c0f5f517", 00:17:04.996 "strip_size_kb": 64, 00:17:04.996 "state": "online", 00:17:04.996 "raid_level": "concat", 00:17:04.996 "superblock": true, 00:17:04.996 "num_base_bdevs": 3, 00:17:04.996 "num_base_bdevs_discovered": 3, 00:17:04.996 "num_base_bdevs_operational": 3, 00:17:04.996 "base_bdevs_list": [ 00:17:04.996 { 00:17:04.996 "name": "BaseBdev1", 00:17:04.996 "uuid": "4636fc7f-351b-4c19-8629-6e1db8e522db", 00:17:04.996 "is_configured": true, 00:17:04.996 "data_offset": 2048, 00:17:04.996 "data_size": 63488 00:17:04.996 }, 00:17:04.996 { 00:17:04.996 "name": "BaseBdev2", 00:17:04.996 "uuid": "5046bebd-5410-4291-b5b5-150da0024f28", 00:17:04.996 "is_configured": true, 00:17:04.996 "data_offset": 2048, 00:17:04.996 "data_size": 63488 00:17:04.996 }, 00:17:04.996 { 00:17:04.996 "name": "BaseBdev3", 00:17:04.996 "uuid": "0a293c53-2606-4079-b402-edc27078378e", 00:17:04.996 "is_configured": true, 00:17:04.996 "data_offset": 2048, 00:17:04.996 "data_size": 63488 00:17:04.996 } 00:17:04.996 ] 00:17:04.996 }' 00:17:04.996 16:55:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:04.996 16:55:53 -- common/autotest_common.sh@10 -- # set +x 00:17:05.561 16:55:54 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:05.820 [2024-11-05 16:55:54.468822] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:05.820 [2024-11-05 16:55:54.469087] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:05.820 [2024-11-05 16:55:54.469251] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.820 16:55:54 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:05.820 16:55:54 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:05.820 16:55:54 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:05.820 16:55:54 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:05.820 16:55:54 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:05.820 16:55:54 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:05.820 16:55:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:05.820 16:55:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:05.820 16:55:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:05.820 16:55:54 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:05.820 16:55:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:05.820 16:55:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:05.820 16:55:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:05.820 16:55:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:05.820 16:55:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:05.820 16:55:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.820 16:55:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.078 16:55:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:06.078 "name": "Existed_Raid", 00:17:06.078 "uuid": "1e1c3cd7-d493-4cab-b78b-fa08c0f5f517", 00:17:06.078 "strip_size_kb": 64, 00:17:06.078 "state": "offline", 00:17:06.078 "raid_level": "concat", 00:17:06.078 "superblock": true, 00:17:06.078 "num_base_bdevs": 3, 00:17:06.078 "num_base_bdevs_discovered": 2, 00:17:06.078 "num_base_bdevs_operational": 2, 00:17:06.078 "base_bdevs_list": [ 00:17:06.078 { 00:17:06.078 "name": null, 00:17:06.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.078 "is_configured": false, 00:17:06.078 "data_offset": 2048, 00:17:06.078 "data_size": 63488 00:17:06.078 }, 00:17:06.078 { 00:17:06.078 "name": "BaseBdev2", 00:17:06.078 "uuid": "5046bebd-5410-4291-b5b5-150da0024f28", 00:17:06.078 "is_configured": true, 00:17:06.078 "data_offset": 2048, 00:17:06.078 "data_size": 63488 00:17:06.078 }, 00:17:06.078 { 00:17:06.078 "name": "BaseBdev3", 00:17:06.078 "uuid": "0a293c53-2606-4079-b402-edc27078378e", 00:17:06.078 "is_configured": true, 00:17:06.078 "data_offset": 2048, 00:17:06.078 "data_size": 63488 00:17:06.078 } 00:17:06.078 ] 00:17:06.078 }' 00:17:06.078 16:55:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:06.078 16:55:54 -- common/autotest_common.sh@10 -- # set +x 00:17:06.644 16:55:55 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:06.644 16:55:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:06.644 16:55:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.644 16:55:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:06.901 16:55:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:06.901 16:55:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:06.901 16:55:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:07.158 [2024-11-05 16:55:55.970359] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:07.416 16:55:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:07.416 16:55:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:07.416 16:55:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.416 16:55:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:07.674 16:55:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:07.674 16:55:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:07.674 16:55:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:07.674 [2024-11-05 16:55:56.501143] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 
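[Editor's note] This stretch covers the degradation case: bdev_malloc_delete rips a claimed member out of the online array, and because has_redundancy returns 1 for concat, the expected state after losing any member is "offline" (a mirrored level such as raid1 would presumably stay up degraded). The removed member's slot is kept in base_bdevs_list with "name": null and the all-zero UUID, which is what verify_raid_bdev_state keys on. A sketch of the same probe, reusing the $rpc alias from the earlier note:

  $rpc bdev_malloc_delete BaseBdev1    # hot-remove a claimed base bdev
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
  # concat carries no redundancy, so this now reports "offline"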
00:17:07.674 [2024-11-05 16:55:56.501359] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:17:07.932 16:55:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:07.932 16:55:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:07.932 16:55:56 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.932 16:55:56 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:08.190 16:55:56 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:08.190 16:55:56 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:08.190 16:55:56 -- bdev/bdev_raid.sh@287 -- # killprocess 116335 00:17:08.190 16:55:56 -- common/autotest_common.sh@936 -- # '[' -z 116335 ']' 00:17:08.190 16:55:56 -- common/autotest_common.sh@940 -- # kill -0 116335 00:17:08.190 16:55:56 -- common/autotest_common.sh@941 -- # uname 00:17:08.190 16:55:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:08.190 16:55:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116335 00:17:08.190 16:55:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:08.190 16:55:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:08.190 16:55:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116335' 00:17:08.190 killing process with pid 116335 00:17:08.190 16:55:56 -- common/autotest_common.sh@955 -- # kill 116335 00:17:08.190 [2024-11-05 16:55:56.859063] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:08.190 16:55:56 -- common/autotest_common.sh@960 -- # wait 116335 00:17:08.190 [2024-11-05 16:55:56.859355] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:09.124 00:17:09.124 real 0m13.773s 00:17:09.124 user 0m24.606s 00:17:09.124 sys 0m1.500s 00:17:09.124 ************************************ 00:17:09.124 END TEST raid_state_function_test_sb 00:17:09.124 ************************************ 00:17:09.124 16:55:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:09.124 16:55:57 -- common/autotest_common.sh@10 -- # set +x 00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:17:09.124 16:55:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:09.124 16:55:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:09.124 16:55:57 -- common/autotest_common.sh@10 -- # set +x 00:17:09.124 ************************************ 00:17:09.124 START TEST raid_superblock_test 00:17:09.124 ************************************ 00:17:09.124 16:55:57 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 3 00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@344 -- # local strip_size 
00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@357 -- # raid_pid=116742 00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@358 -- # waitforlisten 116742 /var/tmp/spdk-raid.sock 00:17:09.124 16:55:57 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:09.124 16:55:57 -- common/autotest_common.sh@829 -- # '[' -z 116742 ']' 00:17:09.124 16:55:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:09.124 16:55:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.124 16:55:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:09.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:09.124 16:55:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.124 16:55:57 -- common/autotest_common.sh@10 -- # set +x 00:17:09.124 [2024-11-05 16:55:57.934773] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:09.124 [2024-11-05 16:55:57.935247] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116742 ] 00:17:09.382 [2024-11-05 16:55:58.105474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.640 [2024-11-05 16:55:58.283139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.640 [2024-11-05 16:55:58.461001] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:10.213 16:55:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.213 16:55:58 -- common/autotest_common.sh@862 -- # return 0 00:17:10.213 16:55:58 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:10.213 16:55:58 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:10.213 16:55:58 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:10.213 16:55:58 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:10.213 16:55:58 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:10.213 16:55:58 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:10.213 16:55:58 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:10.213 16:55:58 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:10.213 16:55:58 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:10.471 malloc1 00:17:10.471 16:55:59 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:10.471 [2024-11-05 16:55:59.323537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:10.471 [2024-11-05 16:55:59.323853] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:10.471 [2024-11-05 16:55:59.324042] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:17:10.471 [2024-11-05 16:55:59.324216] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.471 [2024-11-05 16:55:59.326914] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.471 [2024-11-05 16:55:59.327110] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:10.471 pt1 00:17:10.471 16:55:59 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:10.471 16:55:59 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:10.471 16:55:59 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:10.471 16:55:59 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:10.471 16:55:59 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:10.471 16:55:59 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:10.471 16:55:59 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:10.471 16:55:59 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:10.471 16:55:59 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:11.037 malloc2 00:17:11.037 16:55:59 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:11.037 [2024-11-05 16:55:59.841749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:11.037 [2024-11-05 16:55:59.841979] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.037 [2024-11-05 16:55:59.842060] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:11.037 [2024-11-05 16:55:59.842359] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.037 [2024-11-05 16:55:59.844740] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.037 [2024-11-05 16:55:59.844920] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:11.037 pt2 00:17:11.037 16:55:59 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:11.037 16:55:59 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:11.037 16:55:59 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:11.037 16:55:59 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:11.037 16:55:59 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:11.037 16:55:59 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:11.037 16:55:59 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:11.037 16:55:59 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:11.037 16:55:59 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:11.296 malloc3 00:17:11.296 16:56:00 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:11.554 [2024-11-05 16:56:00.308042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:11.554 [2024-11-05 16:56:00.308270] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:11.554 [2024-11-05 16:56:00.308416] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:11.554 [2024-11-05 16:56:00.308562] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.554 [2024-11-05 16:56:00.311073] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.554 [2024-11-05 16:56:00.311269] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:11.554 pt3 00:17:11.554 16:56:00 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:11.554 16:56:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:11.555 16:56:00 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:11.813 [2024-11-05 16:56:00.500138] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:11.813 [2024-11-05 16:56:00.502111] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:11.813 [2024-11-05 16:56:00.502332] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:11.813 [2024-11-05 16:56:00.502698] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:17:11.813 [2024-11-05 16:56:00.502849] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:11.813 [2024-11-05 16:56:00.503145] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:11.813 [2024-11-05 16:56:00.503644] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:17:11.813 [2024-11-05 16:56:00.503768] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:17:11.813 [2024-11-05 16:56:00.503981] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.813 16:56:00 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:11.813 16:56:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:11.813 16:56:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:11.813 16:56:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:11.813 16:56:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:11.813 16:56:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:11.813 16:56:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:11.813 16:56:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:11.813 16:56:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:11.813 16:56:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:11.813 16:56:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.813 16:56:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.071 16:56:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.071 "name": "raid_bdev1", 00:17:12.071 "uuid": "938ed3fe-5ef2-48eb-b94d-21e2a596d841", 00:17:12.071 "strip_size_kb": 64, 00:17:12.071 "state": "online", 00:17:12.071 "raid_level": "concat", 00:17:12.071 "superblock": true, 00:17:12.071 "num_base_bdevs": 3, 00:17:12.071 "num_base_bdevs_discovered": 3, 00:17:12.071 "num_base_bdevs_operational": 3, 00:17:12.071 "base_bdevs_list": [ 00:17:12.071 { 00:17:12.071 "name": "pt1", 00:17:12.071 "uuid": 
"fcbb35d4-2204-59e6-b3c2-22763ba23867", 00:17:12.071 "is_configured": true, 00:17:12.071 "data_offset": 2048, 00:17:12.071 "data_size": 63488 00:17:12.071 }, 00:17:12.071 { 00:17:12.071 "name": "pt2", 00:17:12.071 "uuid": "599b9c48-7226-524b-bb92-0d2f412adf64", 00:17:12.071 "is_configured": true, 00:17:12.071 "data_offset": 2048, 00:17:12.071 "data_size": 63488 00:17:12.071 }, 00:17:12.071 { 00:17:12.071 "name": "pt3", 00:17:12.071 "uuid": "5e2ba977-573b-5e0f-a93b-a8ae0570dce7", 00:17:12.071 "is_configured": true, 00:17:12.071 "data_offset": 2048, 00:17:12.071 "data_size": 63488 00:17:12.071 } 00:17:12.071 ] 00:17:12.071 }' 00:17:12.071 16:56:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.071 16:56:00 -- common/autotest_common.sh@10 -- # set +x 00:17:12.637 16:56:01 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:12.637 16:56:01 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:12.895 [2024-11-05 16:56:01.572507] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.895 16:56:01 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=938ed3fe-5ef2-48eb-b94d-21e2a596d841 00:17:12.895 16:56:01 -- bdev/bdev_raid.sh@380 -- # '[' -z 938ed3fe-5ef2-48eb-b94d-21e2a596d841 ']' 00:17:12.895 16:56:01 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:13.153 [2024-11-05 16:56:01.820381] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:13.153 [2024-11-05 16:56:01.820542] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:13.153 [2024-11-05 16:56:01.820752] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:13.153 [2024-11-05 16:56:01.820921] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:13.153 [2024-11-05 16:56:01.821018] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:17:13.153 16:56:01 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.153 16:56:01 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:13.411 16:56:02 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:13.411 16:56:02 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:13.411 16:56:02 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:13.411 16:56:02 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:13.411 16:56:02 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:13.411 16:56:02 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:13.669 16:56:02 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:13.669 16:56:02 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:13.928 16:56:02 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:13.928 16:56:02 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:14.186 16:56:03 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:14.186 16:56:03 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:14.186 16:56:03 -- common/autotest_common.sh@650 -- # local es=0 00:17:14.186 16:56:03 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:14.186 16:56:03 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:14.186 16:56:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.186 16:56:03 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:14.186 16:56:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.186 16:56:03 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:14.186 16:56:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.186 16:56:03 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:14.186 16:56:03 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:14.186 16:56:03 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:14.444 [2024-11-05 16:56:03.264702] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:14.444 [2024-11-05 16:56:03.267145] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:14.444 [2024-11-05 16:56:03.267357] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:14.444 [2024-11-05 16:56:03.267456] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:14.444 [2024-11-05 16:56:03.267776] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:14.444 [2024-11-05 16:56:03.267935] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:14.444 [2024-11-05 16:56:03.268092] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.444 [2024-11-05 16:56:03.268192] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:17:14.444 request: 00:17:14.444 { 00:17:14.444 "name": "raid_bdev1", 00:17:14.445 "raid_level": "concat", 00:17:14.445 "base_bdevs": [ 00:17:14.445 "malloc1", 00:17:14.445 "malloc2", 00:17:14.445 "malloc3" 00:17:14.445 ], 00:17:14.445 "superblock": false, 00:17:14.445 "strip_size_kb": 64, 00:17:14.445 "method": "bdev_raid_create", 00:17:14.445 "req_id": 1 00:17:14.445 } 00:17:14.445 Got JSON-RPC error response 00:17:14.445 response: 00:17:14.445 { 00:17:14.445 "code": -17, 00:17:14.445 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:14.445 } 00:17:14.445 16:56:03 -- common/autotest_common.sh@653 -- # es=1 00:17:14.445 16:56:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:14.445 16:56:03 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:14.445 16:56:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:14.445 16:56:03 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.445 16:56:03 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:14.703 16:56:03 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:14.703 16:56:03 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:14.703 16:56:03 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:14.961 [2024-11-05 16:56:03.708827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:14.961 [2024-11-05 16:56:03.709092] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.961 [2024-11-05 16:56:03.709198] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:14.961 [2024-11-05 16:56:03.709463] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.961 [2024-11-05 16:56:03.711919] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.961 [2024-11-05 16:56:03.712104] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:14.961 [2024-11-05 16:56:03.712357] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:14.962 [2024-11-05 16:56:03.712507] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:14.962 pt1 00:17:14.962 16:56:03 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:17:14.962 16:56:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:14.962 16:56:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:14.962 16:56:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:14.962 16:56:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:14.962 16:56:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:14.962 16:56:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:14.962 16:56:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:14.962 16:56:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:14.962 16:56:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:14.962 16:56:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.962 16:56:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.220 16:56:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:15.220 "name": "raid_bdev1", 00:17:15.220 "uuid": "938ed3fe-5ef2-48eb-b94d-21e2a596d841", 00:17:15.220 "strip_size_kb": 64, 00:17:15.220 "state": "configuring", 00:17:15.220 "raid_level": "concat", 00:17:15.220 "superblock": true, 00:17:15.220 "num_base_bdevs": 3, 00:17:15.220 "num_base_bdevs_discovered": 1, 00:17:15.220 "num_base_bdevs_operational": 3, 00:17:15.220 "base_bdevs_list": [ 00:17:15.220 { 00:17:15.220 "name": "pt1", 00:17:15.220 "uuid": "fcbb35d4-2204-59e6-b3c2-22763ba23867", 00:17:15.220 "is_configured": true, 00:17:15.220 "data_offset": 2048, 00:17:15.220 "data_size": 63488 00:17:15.220 }, 00:17:15.220 { 00:17:15.220 "name": null, 00:17:15.220 "uuid": "599b9c48-7226-524b-bb92-0d2f412adf64", 00:17:15.220 "is_configured": false, 00:17:15.220 "data_offset": 2048, 00:17:15.220 "data_size": 63488 00:17:15.220 }, 00:17:15.220 { 00:17:15.220 "name": null, 00:17:15.220 "uuid": "5e2ba977-573b-5e0f-a93b-a8ae0570dce7", 00:17:15.220 "is_configured": false, 00:17:15.220 
"data_offset": 2048, 00:17:15.220 "data_size": 63488 00:17:15.220 } 00:17:15.220 ] 00:17:15.220 }' 00:17:15.220 16:56:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:15.220 16:56:03 -- common/autotest_common.sh@10 -- # set +x 00:17:15.790 16:56:04 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:15.790 16:56:04 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:16.048 [2024-11-05 16:56:04.821119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:16.048 [2024-11-05 16:56:04.821485] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.048 [2024-11-05 16:56:04.821683] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:16.048 [2024-11-05 16:56:04.821807] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.048 [2024-11-05 16:56:04.822394] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.048 [2024-11-05 16:56:04.822561] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:16.048 [2024-11-05 16:56:04.822810] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:16.048 [2024-11-05 16:56:04.823062] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:16.048 pt2 00:17:16.048 16:56:04 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:16.306 [2024-11-05 16:56:05.121280] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:16.306 16:56:05 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:17:16.306 16:56:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:16.306 16:56:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:16.306 16:56:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:16.306 16:56:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:16.306 16:56:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:16.306 16:56:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:16.306 16:56:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:16.306 16:56:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:16.306 16:56:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:16.306 16:56:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.306 16:56:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.563 16:56:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:16.563 "name": "raid_bdev1", 00:17:16.563 "uuid": "938ed3fe-5ef2-48eb-b94d-21e2a596d841", 00:17:16.563 "strip_size_kb": 64, 00:17:16.563 "state": "configuring", 00:17:16.563 "raid_level": "concat", 00:17:16.563 "superblock": true, 00:17:16.564 "num_base_bdevs": 3, 00:17:16.564 "num_base_bdevs_discovered": 1, 00:17:16.564 "num_base_bdevs_operational": 3, 00:17:16.564 "base_bdevs_list": [ 00:17:16.564 { 00:17:16.564 "name": "pt1", 00:17:16.564 "uuid": "fcbb35d4-2204-59e6-b3c2-22763ba23867", 00:17:16.564 "is_configured": true, 00:17:16.564 "data_offset": 2048, 00:17:16.564 "data_size": 63488 00:17:16.564 }, 00:17:16.564 { 00:17:16.564 "name": null, 00:17:16.564 "uuid": 
"599b9c48-7226-524b-bb92-0d2f412adf64", 00:17:16.564 "is_configured": false, 00:17:16.564 "data_offset": 2048, 00:17:16.564 "data_size": 63488 00:17:16.564 }, 00:17:16.564 { 00:17:16.564 "name": null, 00:17:16.564 "uuid": "5e2ba977-573b-5e0f-a93b-a8ae0570dce7", 00:17:16.564 "is_configured": false, 00:17:16.564 "data_offset": 2048, 00:17:16.564 "data_size": 63488 00:17:16.564 } 00:17:16.564 ] 00:17:16.564 }' 00:17:16.564 16:56:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:16.564 16:56:05 -- common/autotest_common.sh@10 -- # set +x 00:17:17.495 16:56:06 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:17.495 16:56:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:17.495 16:56:06 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:17.753 [2024-11-05 16:56:06.429583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:17.753 [2024-11-05 16:56:06.429879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.753 [2024-11-05 16:56:06.430081] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:17.753 [2024-11-05 16:56:06.430249] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.753 [2024-11-05 16:56:06.431032] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.753 [2024-11-05 16:56:06.431209] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:17.753 [2024-11-05 16:56:06.431509] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:17.753 [2024-11-05 16:56:06.431670] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:17.753 pt2 00:17:17.753 16:56:06 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:17.753 16:56:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:17.753 16:56:06 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:18.010 [2024-11-05 16:56:06.673653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:18.010 [2024-11-05 16:56:06.673961] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.010 [2024-11-05 16:56:06.674113] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:18.010 [2024-11-05 16:56:06.674232] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.010 [2024-11-05 16:56:06.674946] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.010 [2024-11-05 16:56:06.675166] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:18.010 [2024-11-05 16:56:06.675421] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:18.010 [2024-11-05 16:56:06.675545] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:18.010 [2024-11-05 16:56:06.675716] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:17:18.010 [2024-11-05 16:56:06.675832] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:18.010 [2024-11-05 16:56:06.676053] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:17:18.010 [2024-11-05 16:56:06.676495] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:17:18.010 [2024-11-05 16:56:06.676621] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:17:18.010 [2024-11-05 16:56:06.676837] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.010 pt3 00:17:18.010 16:56:06 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:18.010 16:56:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:18.010 16:56:06 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:18.010 16:56:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:18.010 16:56:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:18.010 16:56:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:18.010 16:56:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:18.010 16:56:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:18.010 16:56:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:18.010 16:56:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:18.010 16:56:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:18.010 16:56:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:18.010 16:56:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.011 16:56:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.268 16:56:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:18.268 "name": "raid_bdev1", 00:17:18.268 "uuid": "938ed3fe-5ef2-48eb-b94d-21e2a596d841", 00:17:18.268 "strip_size_kb": 64, 00:17:18.268 "state": "online", 00:17:18.268 "raid_level": "concat", 00:17:18.268 "superblock": true, 00:17:18.268 "num_base_bdevs": 3, 00:17:18.268 "num_base_bdevs_discovered": 3, 00:17:18.268 "num_base_bdevs_operational": 3, 00:17:18.268 "base_bdevs_list": [ 00:17:18.268 { 00:17:18.268 "name": "pt1", 00:17:18.268 "uuid": "fcbb35d4-2204-59e6-b3c2-22763ba23867", 00:17:18.268 "is_configured": true, 00:17:18.268 "data_offset": 2048, 00:17:18.268 "data_size": 63488 00:17:18.268 }, 00:17:18.268 { 00:17:18.268 "name": "pt2", 00:17:18.268 "uuid": "599b9c48-7226-524b-bb92-0d2f412adf64", 00:17:18.268 "is_configured": true, 00:17:18.268 "data_offset": 2048, 00:17:18.268 "data_size": 63488 00:17:18.268 }, 00:17:18.268 { 00:17:18.268 "name": "pt3", 00:17:18.268 "uuid": "5e2ba977-573b-5e0f-a93b-a8ae0570dce7", 00:17:18.268 "is_configured": true, 00:17:18.268 "data_offset": 2048, 00:17:18.268 "data_size": 63488 00:17:18.268 } 00:17:18.268 ] 00:17:18.268 }' 00:17:18.268 16:56:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:18.268 16:56:06 -- common/autotest_common.sh@10 -- # set +x 00:17:18.833 16:56:07 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:18.833 16:56:07 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:19.091 [2024-11-05 16:56:07.862134] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.091 16:56:07 -- bdev/bdev_raid.sh@430 -- # '[' 938ed3fe-5ef2-48eb-b94d-21e2a596d841 '!=' 938ed3fe-5ef2-48eb-b94d-21e2a596d841 ']' 00:17:19.091 16:56:07 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:17:19.091 16:56:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:19.091 
16:56:07 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:19.091 16:56:07 -- bdev/bdev_raid.sh@511 -- # killprocess 116742 00:17:19.091 16:56:07 -- common/autotest_common.sh@936 -- # '[' -z 116742 ']' 00:17:19.091 16:56:07 -- common/autotest_common.sh@940 -- # kill -0 116742 00:17:19.091 16:56:07 -- common/autotest_common.sh@941 -- # uname 00:17:19.091 16:56:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:19.091 16:56:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116742 00:17:19.091 killing process with pid 116742 00:17:19.091 16:56:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:19.091 16:56:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:19.091 16:56:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116742' 00:17:19.091 16:56:07 -- common/autotest_common.sh@955 -- # kill 116742 00:17:19.091 [2024-11-05 16:56:07.899879] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:19.091 16:56:07 -- common/autotest_common.sh@960 -- # wait 116742 00:17:19.091 [2024-11-05 16:56:07.899946] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.091 [2024-11-05 16:56:07.900036] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.091 [2024-11-05 16:56:07.900062] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:17:19.349 [2024-11-05 16:56:08.101394] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:20.283 ************************************ 00:17:20.283 END TEST raid_superblock_test 00:17:20.283 ************************************ 00:17:20.283 16:56:09 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:20.283 00:17:20.283 real 0m11.193s 00:17:20.283 user 0m19.651s 00:17:20.283 sys 0m1.317s 00:17:20.283 16:56:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:20.283 16:56:09 -- common/autotest_common.sh@10 -- # set +x 00:17:20.283 16:56:09 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:20.283 16:56:09 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:17:20.283 16:56:09 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:20.283 16:56:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:20.283 16:56:09 -- common/autotest_common.sh@10 -- # set +x 00:17:20.283 ************************************ 00:17:20.284 START TEST raid_state_function_test 00:17:20.284 ************************************ 00:17:20.284 16:56:09 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 3 false 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@226 -- # raid_pid=117059 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 117059' 00:17:20.284 Process raid pid: 117059 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:20.284 16:56:09 -- bdev/bdev_raid.sh@228 -- # waitforlisten 117059 /var/tmp/spdk-raid.sock 00:17:20.284 16:56:09 -- common/autotest_common.sh@829 -- # '[' -z 117059 ']' 00:17:20.284 16:56:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:20.284 16:56:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:20.284 16:56:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:20.284 16:56:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.284 16:56:09 -- common/autotest_common.sh@10 -- # set +x 00:17:20.542 [2024-11-05 16:56:09.194542] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
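[Editor's note] Each test case in bdev_raid.sh boots its own bdev_svc app and kills it at the end, so no raid state leaks between cases; -L bdev_raid is what switches on the *DEBUG* traces from bdev_raid.c seen throughout this log. The harness pattern, reconstructed from the traces above (waitforlisten and killprocess are autotest_common.sh helpers; paths and flags are taken from the log, the backgrounding is inferred):

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  waitforlisten $raid_pid /var/tmp/spdk-raid.sock   # blocks until the RPC socket answers
  # ... run the test's RPCs against /var/tmp/spdk-raid.sock ...
  killprocess $raid_pid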
00:17:20.542 [2024-11-05 16:56:09.195606] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.542 [2024-11-05 16:56:09.359373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.800 [2024-11-05 16:56:09.551484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.058 [2024-11-05 16:56:09.753981] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:21.316 16:56:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.316 16:56:10 -- common/autotest_common.sh@862 -- # return 0 00:17:21.316 16:56:10 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:21.574 [2024-11-05 16:56:10.281460] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:21.574 [2024-11-05 16:56:10.281828] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:21.574 [2024-11-05 16:56:10.281951] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:21.574 [2024-11-05 16:56:10.282014] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:21.574 [2024-11-05 16:56:10.282112] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:21.574 [2024-11-05 16:56:10.282196] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:21.574 16:56:10 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:21.574 16:56:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:21.574 16:56:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:21.574 16:56:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:21.574 16:56:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:21.574 16:56:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:21.574 16:56:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:21.574 16:56:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:21.574 16:56:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:21.574 16:56:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:21.574 16:56:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.574 16:56:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.833 16:56:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:21.833 "name": "Existed_Raid", 00:17:21.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.833 "strip_size_kb": 0, 00:17:21.833 "state": "configuring", 00:17:21.833 "raid_level": "raid1", 00:17:21.833 "superblock": false, 00:17:21.833 "num_base_bdevs": 3, 00:17:21.833 "num_base_bdevs_discovered": 0, 00:17:21.833 "num_base_bdevs_operational": 3, 00:17:21.833 "base_bdevs_list": [ 00:17:21.833 { 00:17:21.833 "name": "BaseBdev1", 00:17:21.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.833 "is_configured": false, 00:17:21.833 "data_offset": 0, 00:17:21.833 "data_size": 0 00:17:21.833 }, 00:17:21.833 { 00:17:21.833 "name": "BaseBdev2", 00:17:21.833 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:21.833 "is_configured": false, 00:17:21.833 "data_offset": 0, 00:17:21.833 "data_size": 0 00:17:21.833 }, 00:17:21.833 { 00:17:21.833 "name": "BaseBdev3", 00:17:21.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.833 "is_configured": false, 00:17:21.833 "data_offset": 0, 00:17:21.833 "data_size": 0 00:17:21.833 } 00:17:21.833 ] 00:17:21.833 }' 00:17:21.833 16:56:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:21.833 16:56:10 -- common/autotest_common.sh@10 -- # set +x 00:17:22.400 16:56:11 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:22.659 [2024-11-05 16:56:11.389641] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:22.659 [2024-11-05 16:56:11.389909] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:17:22.659 16:56:11 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:22.917 [2024-11-05 16:56:11.597736] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:22.917 [2024-11-05 16:56:11.598061] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:22.917 [2024-11-05 16:56:11.598181] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:22.917 [2024-11-05 16:56:11.598270] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:22.917 [2024-11-05 16:56:11.598503] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:22.917 [2024-11-05 16:56:11.598607] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:22.917 16:56:11 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:23.175 [2024-11-05 16:56:11.891718] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:23.175 BaseBdev1 00:17:23.175 16:56:11 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:23.175 16:56:11 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:23.175 16:56:11 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:23.175 16:56:11 -- common/autotest_common.sh@899 -- # local i 00:17:23.175 16:56:11 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:23.175 16:56:11 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:23.175 16:56:11 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:23.433 16:56:12 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:23.692 [ 00:17:23.692 { 00:17:23.692 "name": "BaseBdev1", 00:17:23.692 "aliases": [ 00:17:23.692 "8865b259-d6f1-4af5-a840-cfe881354d20" 00:17:23.692 ], 00:17:23.692 "product_name": "Malloc disk", 00:17:23.692 "block_size": 512, 00:17:23.692 "num_blocks": 65536, 00:17:23.692 "uuid": "8865b259-d6f1-4af5-a840-cfe881354d20", 00:17:23.692 "assigned_rate_limits": { 00:17:23.692 "rw_ios_per_sec": 0, 00:17:23.692 "rw_mbytes_per_sec": 0, 00:17:23.692 "r_mbytes_per_sec": 0, 00:17:23.692 "w_mbytes_per_sec": 0 
00:17:23.692 }, 00:17:23.692 "claimed": true, 00:17:23.692 "claim_type": "exclusive_write", 00:17:23.692 "zoned": false, 00:17:23.692 "supported_io_types": { 00:17:23.692 "read": true, 00:17:23.692 "write": true, 00:17:23.692 "unmap": true, 00:17:23.692 "write_zeroes": true, 00:17:23.692 "flush": true, 00:17:23.692 "reset": true, 00:17:23.692 "compare": false, 00:17:23.692 "compare_and_write": false, 00:17:23.692 "abort": true, 00:17:23.692 "nvme_admin": false, 00:17:23.692 "nvme_io": false 00:17:23.692 }, 00:17:23.692 "memory_domains": [ 00:17:23.692 { 00:17:23.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.692 "dma_device_type": 2 00:17:23.692 } 00:17:23.692 ], 00:17:23.692 "driver_specific": {} 00:17:23.692 } 00:17:23.692 ] 00:17:23.692 16:56:12 -- common/autotest_common.sh@905 -- # return 0 00:17:23.692 16:56:12 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:23.692 16:56:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:23.692 16:56:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:23.692 16:56:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:23.692 16:56:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:23.692 16:56:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:23.692 16:56:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:23.692 16:56:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:23.692 16:56:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:23.692 16:56:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:23.692 16:56:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.692 16:56:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.692 16:56:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:23.692 "name": "Existed_Raid", 00:17:23.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.692 "strip_size_kb": 0, 00:17:23.692 "state": "configuring", 00:17:23.692 "raid_level": "raid1", 00:17:23.692 "superblock": false, 00:17:23.692 "num_base_bdevs": 3, 00:17:23.692 "num_base_bdevs_discovered": 1, 00:17:23.692 "num_base_bdevs_operational": 3, 00:17:23.692 "base_bdevs_list": [ 00:17:23.692 { 00:17:23.692 "name": "BaseBdev1", 00:17:23.692 "uuid": "8865b259-d6f1-4af5-a840-cfe881354d20", 00:17:23.692 "is_configured": true, 00:17:23.692 "data_offset": 0, 00:17:23.692 "data_size": 65536 00:17:23.692 }, 00:17:23.692 { 00:17:23.692 "name": "BaseBdev2", 00:17:23.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.692 "is_configured": false, 00:17:23.692 "data_offset": 0, 00:17:23.692 "data_size": 0 00:17:23.692 }, 00:17:23.692 { 00:17:23.692 "name": "BaseBdev3", 00:17:23.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.692 "is_configured": false, 00:17:23.692 "data_offset": 0, 00:17:23.692 "data_size": 0 00:17:23.692 } 00:17:23.692 ] 00:17:23.692 }' 00:17:23.692 16:56:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:23.692 16:56:12 -- common/autotest_common.sh@10 -- # set +x 00:17:24.628 16:56:13 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:24.628 [2024-11-05 16:56:13.420303] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.628 [2024-11-05 16:56:13.420573] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 
name Existed_Raid, state configuring 00:17:24.628 16:56:13 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:24.628 16:56:13 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:24.886 [2024-11-05 16:56:13.660380] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.886 [2024-11-05 16:56:13.662640] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.886 [2024-11-05 16:56:13.662874] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.886 [2024-11-05 16:56:13.663011] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:24.886 [2024-11-05 16:56:13.663082] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:24.886 16:56:13 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:24.886 16:56:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:24.886 16:56:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:24.886 16:56:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:24.886 16:56:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:24.886 16:56:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:24.886 16:56:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:24.886 16:56:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:24.886 16:56:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:24.886 16:56:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:24.886 16:56:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:24.886 16:56:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:24.886 16:56:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.886 16:56:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.145 16:56:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:25.145 "name": "Existed_Raid", 00:17:25.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.145 "strip_size_kb": 0, 00:17:25.145 "state": "configuring", 00:17:25.145 "raid_level": "raid1", 00:17:25.145 "superblock": false, 00:17:25.145 "num_base_bdevs": 3, 00:17:25.145 "num_base_bdevs_discovered": 1, 00:17:25.145 "num_base_bdevs_operational": 3, 00:17:25.145 "base_bdevs_list": [ 00:17:25.145 { 00:17:25.145 "name": "BaseBdev1", 00:17:25.145 "uuid": "8865b259-d6f1-4af5-a840-cfe881354d20", 00:17:25.145 "is_configured": true, 00:17:25.145 "data_offset": 0, 00:17:25.145 "data_size": 65536 00:17:25.145 }, 00:17:25.145 { 00:17:25.145 "name": "BaseBdev2", 00:17:25.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.145 "is_configured": false, 00:17:25.145 "data_offset": 0, 00:17:25.145 "data_size": 0 00:17:25.145 }, 00:17:25.145 { 00:17:25.145 "name": "BaseBdev3", 00:17:25.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.145 "is_configured": false, 00:17:25.145 "data_offset": 0, 00:17:25.145 "data_size": 0 00:17:25.145 } 00:17:25.145 ] 00:17:25.145 }' 00:17:25.145 16:56:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:25.145 16:56:13 -- common/autotest_common.sh@10 -- # set +x 00:17:25.711 16:56:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:25.970 [2024-11-05 16:56:14.741115] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:25.970 BaseBdev2 00:17:25.970 16:56:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:25.970 16:56:14 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:25.970 16:56:14 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:25.970 16:56:14 -- common/autotest_common.sh@899 -- # local i 00:17:25.970 16:56:14 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:25.970 16:56:14 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:25.970 16:56:14 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:26.228 16:56:14 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:26.487 [ 00:17:26.487 { 00:17:26.487 "name": "BaseBdev2", 00:17:26.487 "aliases": [ 00:17:26.487 "163dce0c-8b66-430e-bbd9-4968ed74981d" 00:17:26.487 ], 00:17:26.487 "product_name": "Malloc disk", 00:17:26.487 "block_size": 512, 00:17:26.487 "num_blocks": 65536, 00:17:26.487 "uuid": "163dce0c-8b66-430e-bbd9-4968ed74981d", 00:17:26.487 "assigned_rate_limits": { 00:17:26.487 "rw_ios_per_sec": 0, 00:17:26.487 "rw_mbytes_per_sec": 0, 00:17:26.487 "r_mbytes_per_sec": 0, 00:17:26.487 "w_mbytes_per_sec": 0 00:17:26.487 }, 00:17:26.487 "claimed": true, 00:17:26.487 "claim_type": "exclusive_write", 00:17:26.487 "zoned": false, 00:17:26.487 "supported_io_types": { 00:17:26.487 "read": true, 00:17:26.487 "write": true, 00:17:26.487 "unmap": true, 00:17:26.487 "write_zeroes": true, 00:17:26.487 "flush": true, 00:17:26.487 "reset": true, 00:17:26.487 "compare": false, 00:17:26.487 "compare_and_write": false, 00:17:26.487 "abort": true, 00:17:26.487 "nvme_admin": false, 00:17:26.487 "nvme_io": false 00:17:26.487 }, 00:17:26.487 "memory_domains": [ 00:17:26.487 { 00:17:26.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.487 "dma_device_type": 2 00:17:26.487 } 00:17:26.487 ], 00:17:26.487 "driver_specific": {} 00:17:26.487 } 00:17:26.487 ] 00:17:26.487 16:56:15 -- common/autotest_common.sh@905 -- # return 0 00:17:26.487 16:56:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:26.487 16:56:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:26.487 16:56:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:26.487 16:56:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:26.487 16:56:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:26.487 16:56:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:26.487 16:56:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:26.487 16:56:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:26.487 16:56:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:26.487 16:56:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:26.487 16:56:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:26.487 16:56:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:26.487 16:56:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.487 16:56:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.487 16:56:15 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:17:26.487 "name": "Existed_Raid", 00:17:26.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.487 "strip_size_kb": 0, 00:17:26.487 "state": "configuring", 00:17:26.487 "raid_level": "raid1", 00:17:26.487 "superblock": false, 00:17:26.487 "num_base_bdevs": 3, 00:17:26.487 "num_base_bdevs_discovered": 2, 00:17:26.487 "num_base_bdevs_operational": 3, 00:17:26.487 "base_bdevs_list": [ 00:17:26.487 { 00:17:26.487 "name": "BaseBdev1", 00:17:26.487 "uuid": "8865b259-d6f1-4af5-a840-cfe881354d20", 00:17:26.487 "is_configured": true, 00:17:26.487 "data_offset": 0, 00:17:26.487 "data_size": 65536 00:17:26.487 }, 00:17:26.487 { 00:17:26.487 "name": "BaseBdev2", 00:17:26.487 "uuid": "163dce0c-8b66-430e-bbd9-4968ed74981d", 00:17:26.487 "is_configured": true, 00:17:26.487 "data_offset": 0, 00:17:26.487 "data_size": 65536 00:17:26.487 }, 00:17:26.487 { 00:17:26.487 "name": "BaseBdev3", 00:17:26.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.487 "is_configured": false, 00:17:26.487 "data_offset": 0, 00:17:26.487 "data_size": 0 00:17:26.487 } 00:17:26.487 ] 00:17:26.487 }' 00:17:26.487 16:56:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:26.487 16:56:15 -- common/autotest_common.sh@10 -- # set +x 00:17:27.422 16:56:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:27.422 [2024-11-05 16:56:16.162929] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:27.422 [2024-11-05 16:56:16.163286] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:17:27.422 [2024-11-05 16:56:16.163330] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:27.422 [2024-11-05 16:56:16.163551] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:27.422 [2024-11-05 16:56:16.164009] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:17:27.422 [2024-11-05 16:56:16.164142] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:17:27.422 [2024-11-05 16:56:16.164506] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.422 BaseBdev3 00:17:27.422 16:56:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:27.422 16:56:16 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:27.422 16:56:16 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:27.422 16:56:16 -- common/autotest_common.sh@899 -- # local i 00:17:27.422 16:56:16 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:27.422 16:56:16 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:27.422 16:56:16 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:27.682 16:56:16 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:27.941 [ 00:17:27.941 { 00:17:27.941 "name": "BaseBdev3", 00:17:27.941 "aliases": [ 00:17:27.941 "ddf5b5f1-a564-470b-a432-268684a91f46" 00:17:27.941 ], 00:17:27.941 "product_name": "Malloc disk", 00:17:27.941 "block_size": 512, 00:17:27.941 "num_blocks": 65536, 00:17:27.941 "uuid": "ddf5b5f1-a564-470b-a432-268684a91f46", 00:17:27.941 "assigned_rate_limits": { 00:17:27.941 "rw_ios_per_sec": 0, 00:17:27.941 "rw_mbytes_per_sec": 0, 
00:17:27.941 "r_mbytes_per_sec": 0, 00:17:27.941 "w_mbytes_per_sec": 0 00:17:27.941 }, 00:17:27.941 "claimed": true, 00:17:27.941 "claim_type": "exclusive_write", 00:17:27.941 "zoned": false, 00:17:27.941 "supported_io_types": { 00:17:27.941 "read": true, 00:17:27.941 "write": true, 00:17:27.941 "unmap": true, 00:17:27.941 "write_zeroes": true, 00:17:27.941 "flush": true, 00:17:27.941 "reset": true, 00:17:27.941 "compare": false, 00:17:27.941 "compare_and_write": false, 00:17:27.941 "abort": true, 00:17:27.941 "nvme_admin": false, 00:17:27.941 "nvme_io": false 00:17:27.941 }, 00:17:27.941 "memory_domains": [ 00:17:27.941 { 00:17:27.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.941 "dma_device_type": 2 00:17:27.941 } 00:17:27.941 ], 00:17:27.941 "driver_specific": {} 00:17:27.941 } 00:17:27.941 ] 00:17:27.941 16:56:16 -- common/autotest_common.sh@905 -- # return 0 00:17:27.941 16:56:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:27.941 16:56:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:27.941 16:56:16 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:27.941 16:56:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:27.941 16:56:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:27.941 16:56:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:27.941 16:56:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:27.941 16:56:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:27.941 16:56:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:27.941 16:56:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:27.941 16:56:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:27.941 16:56:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:27.941 16:56:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.941 16:56:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.199 16:56:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:28.199 "name": "Existed_Raid", 00:17:28.199 "uuid": "de0081c4-0c23-4027-b7ec-de8cf1304c36", 00:17:28.199 "strip_size_kb": 0, 00:17:28.199 "state": "online", 00:17:28.200 "raid_level": "raid1", 00:17:28.200 "superblock": false, 00:17:28.200 "num_base_bdevs": 3, 00:17:28.200 "num_base_bdevs_discovered": 3, 00:17:28.200 "num_base_bdevs_operational": 3, 00:17:28.200 "base_bdevs_list": [ 00:17:28.200 { 00:17:28.200 "name": "BaseBdev1", 00:17:28.200 "uuid": "8865b259-d6f1-4af5-a840-cfe881354d20", 00:17:28.200 "is_configured": true, 00:17:28.200 "data_offset": 0, 00:17:28.200 "data_size": 65536 00:17:28.200 }, 00:17:28.200 { 00:17:28.200 "name": "BaseBdev2", 00:17:28.200 "uuid": "163dce0c-8b66-430e-bbd9-4968ed74981d", 00:17:28.200 "is_configured": true, 00:17:28.200 "data_offset": 0, 00:17:28.200 "data_size": 65536 00:17:28.200 }, 00:17:28.200 { 00:17:28.200 "name": "BaseBdev3", 00:17:28.200 "uuid": "ddf5b5f1-a564-470b-a432-268684a91f46", 00:17:28.200 "is_configured": true, 00:17:28.200 "data_offset": 0, 00:17:28.200 "data_size": 65536 00:17:28.200 } 00:17:28.200 ] 00:17:28.200 }' 00:17:28.200 16:56:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:28.200 16:56:16 -- common/autotest_common.sh@10 -- # set +x 00:17:28.766 16:56:17 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:28.766 [2024-11-05 
16:56:17.659589] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:29.105 16:56:17 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:29.105 16:56:17 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:29.105 16:56:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:29.105 16:56:17 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:29.105 16:56:17 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:29.105 16:56:17 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:29.105 16:56:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:29.105 16:56:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:29.105 16:56:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:29.105 16:56:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:29.105 16:56:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:29.105 16:56:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:29.105 16:56:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:29.105 16:56:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:29.106 16:56:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:29.106 16:56:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.106 16:56:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.364 16:56:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:29.364 "name": "Existed_Raid", 00:17:29.364 "uuid": "de0081c4-0c23-4027-b7ec-de8cf1304c36", 00:17:29.364 "strip_size_kb": 0, 00:17:29.364 "state": "online", 00:17:29.364 "raid_level": "raid1", 00:17:29.364 "superblock": false, 00:17:29.364 "num_base_bdevs": 3, 00:17:29.364 "num_base_bdevs_discovered": 2, 00:17:29.364 "num_base_bdevs_operational": 2, 00:17:29.364 "base_bdevs_list": [ 00:17:29.364 { 00:17:29.364 "name": null, 00:17:29.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.364 "is_configured": false, 00:17:29.364 "data_offset": 0, 00:17:29.364 "data_size": 65536 00:17:29.364 }, 00:17:29.364 { 00:17:29.364 "name": "BaseBdev2", 00:17:29.364 "uuid": "163dce0c-8b66-430e-bbd9-4968ed74981d", 00:17:29.364 "is_configured": true, 00:17:29.364 "data_offset": 0, 00:17:29.364 "data_size": 65536 00:17:29.364 }, 00:17:29.364 { 00:17:29.364 "name": "BaseBdev3", 00:17:29.364 "uuid": "ddf5b5f1-a564-470b-a432-268684a91f46", 00:17:29.364 "is_configured": true, 00:17:29.364 "data_offset": 0, 00:17:29.364 "data_size": 65536 00:17:29.364 } 00:17:29.364 ] 00:17:29.364 }' 00:17:29.364 16:56:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:29.364 16:56:17 -- common/autotest_common.sh@10 -- # set +x 00:17:29.931 16:56:18 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:29.931 16:56:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:29.931 16:56:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.931 16:56:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:30.189 16:56:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:30.189 16:56:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:30.189 16:56:18 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:30.448 [2024-11-05 16:56:19.088883] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
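The removal traced above is the point of the whole test: raid1 mirrors data across its members, so deleting BaseBdev1 must leave Existed_Raid online with two operational members, which is what the "online raid1 0 2" verification checks. Roughly, with rpc as my shorthand for the full rpc.py invocation seen in the log:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# pull one mirror leg out from under the array
$rpc bdev_malloc_delete BaseBdev1
# the array should survive: still online, two members left
$rpc bdev_raid_get_bdevs all |
  jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)"'
# expected output here: online 2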
00:17:30.448 16:56:19 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:30.448 16:56:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:30.448 16:56:19 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:30.448 16:56:19 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.707 16:56:19 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:30.707 16:56:19 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:30.707 16:56:19 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:30.975 [2024-11-05 16:56:19.660781] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:30.975 [2024-11-05 16:56:19.661010] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:30.975 [2024-11-05 16:56:19.661170] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.975 [2024-11-05 16:56:19.725697] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.975 [2024-11-05 16:56:19.725984] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:17:30.975 16:56:19 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:30.975 16:56:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:30.975 16:56:19 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.975 16:56:19 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:31.235 16:56:19 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:31.235 16:56:19 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:31.235 16:56:19 -- bdev/bdev_raid.sh@287 -- # killprocess 117059 00:17:31.235 16:56:19 -- common/autotest_common.sh@936 -- # '[' -z 117059 ']' 00:17:31.235 16:56:19 -- common/autotest_common.sh@940 -- # kill -0 117059 00:17:31.235 16:56:19 -- common/autotest_common.sh@941 -- # uname 00:17:31.235 16:56:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:31.235 16:56:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117059 00:17:31.235 killing process with pid 117059 00:17:31.235 16:56:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:31.235 16:56:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:31.235 16:56:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117059' 00:17:31.235 16:56:20 -- common/autotest_common.sh@955 -- # kill 117059 00:17:31.235 [2024-11-05 16:56:20.012770] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:31.235 16:56:20 -- common/autotest_common.sh@960 -- # wait 117059 00:17:31.235 [2024-11-05 16:56:20.012871] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:32.169 ************************************ 00:17:32.169 END TEST raid_state_function_test 00:17:32.169 ************************************ 00:17:32.169 16:56:20 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:32.169 00:17:32.169 real 0m11.840s 00:17:32.169 user 0m20.749s 00:17:32.169 sys 0m1.542s 00:17:32.169 16:56:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:32.169 16:56:20 -- common/autotest_common.sh@10 -- # set +x 00:17:32.169 16:56:20 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 
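The end-of-test teardown traced above (here and earlier, after raid_superblock_test) is the killprocess helper from autotest_common.sh. It reduces to roughly the following sketch; the sudo comparison never fires in this log, so that branch's body is not reconstructable from the trace:

kill -0 "$raid_pid"                                    # is the reactor still alive?
process_name=$(ps --no-headers -o comm= "$raid_pid")   # reports reactor_0 here
if [ "$process_name" != sudo ]; then
  echo "killing process with pid $raid_pid"
  kill "$raid_pid"
  wait "$raid_pid"   # reap it so the next test can reuse /var/tmp/spdk-raid.sock
fi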
00:17:32.169 16:56:20 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:32.169 16:56:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:32.169 16:56:20 -- common/autotest_common.sh@10 -- # set +x 00:17:32.169 ************************************ 00:17:32.169 START TEST raid_state_function_test_sb 00:17:32.169 ************************************ 00:17:32.169 16:56:21 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 3 true 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@226 -- # raid_pid=117436 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:32.169 Process raid pid: 117436 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 117436' 00:17:32.169 16:56:21 -- bdev/bdev_raid.sh@228 -- # waitforlisten 117436 /var/tmp/spdk-raid.sock 00:17:32.169 16:56:21 -- common/autotest_common.sh@829 -- # '[' -z 117436 ']' 00:17:32.169 16:56:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:32.169 16:56:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:32.169 16:56:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:32.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:32.169 16:56:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:32.169 16:56:21 -- common/autotest_common.sh@10 -- # set +x 00:17:32.428 [2024-11-05 16:56:21.072713] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
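The _sb variant starting here re-runs the same state machine with superblock=true, so the functional difference is just the -s flag on bdev_raid_create; judging by the JSON later in this log, the on-disk superblock reserves space on each member (data_offset moves from 0 to 2048 and data_size from 65536 to 63488 blocks on the 32 MiB, 512-byte-block malloc bdevs). A minimal sketch, reusing the rpc shorthand from earlier:

# identical create call, plus -s to write an on-disk superblock to every base bdev
$rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid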
00:17:32.428 [2024-11-05 16:56:21.073049] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.428 [2024-11-05 16:56:21.225879] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.687 [2024-11-05 16:56:21.410485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.687 [2024-11-05 16:56:21.584221] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.255 16:56:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:33.255 16:56:21 -- common/autotest_common.sh@862 -- # return 0 00:17:33.255 16:56:21 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:33.513 [2024-11-05 16:56:22.184966] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:33.513 [2024-11-05 16:56:22.185397] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:33.513 [2024-11-05 16:56:22.185516] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:33.513 [2024-11-05 16:56:22.185577] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:33.513 [2024-11-05 16:56:22.185673] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:33.513 [2024-11-05 16:56:22.185754] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:33.513 16:56:22 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:33.513 16:56:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:33.513 16:56:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:33.513 16:56:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:33.513 16:56:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:33.513 16:56:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:33.513 16:56:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:33.514 16:56:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:33.514 16:56:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:33.514 16:56:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:33.514 16:56:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.514 16:56:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.772 16:56:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:33.772 "name": "Existed_Raid", 00:17:33.772 "uuid": "af528c92-d842-4035-839c-bae65efa7fb4", 00:17:33.772 "strip_size_kb": 0, 00:17:33.772 "state": "configuring", 00:17:33.772 "raid_level": "raid1", 00:17:33.772 "superblock": true, 00:17:33.772 "num_base_bdevs": 3, 00:17:33.772 "num_base_bdevs_discovered": 0, 00:17:33.772 "num_base_bdevs_operational": 3, 00:17:33.772 "base_bdevs_list": [ 00:17:33.772 { 00:17:33.772 "name": "BaseBdev1", 00:17:33.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.772 "is_configured": false, 00:17:33.772 "data_offset": 0, 00:17:33.772 "data_size": 0 00:17:33.772 }, 00:17:33.772 { 00:17:33.772 "name": "BaseBdev2", 00:17:33.772 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:33.772 "is_configured": false, 00:17:33.772 "data_offset": 0, 00:17:33.772 "data_size": 0 00:17:33.772 }, 00:17:33.772 { 00:17:33.772 "name": "BaseBdev3", 00:17:33.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.772 "is_configured": false, 00:17:33.772 "data_offset": 0, 00:17:33.772 "data_size": 0 00:17:33.772 } 00:17:33.772 ] 00:17:33.772 }' 00:17:33.772 16:56:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:33.772 16:56:22 -- common/autotest_common.sh@10 -- # set +x 00:17:34.339 16:56:23 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:34.597 [2024-11-05 16:56:23.245028] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:34.597 [2024-11-05 16:56:23.245235] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:17:34.597 16:56:23 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:34.597 [2024-11-05 16:56:23.493154] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:34.597 [2024-11-05 16:56:23.493576] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:34.597 [2024-11-05 16:56:23.493686] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:34.597 [2024-11-05 16:56:23.493754] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:34.597 [2024-11-05 16:56:23.493844] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:34.597 [2024-11-05 16:56:23.493917] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:34.856 16:56:23 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:34.856 [2024-11-05 16:56:23.722101] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:34.856 BaseBdev1 00:17:34.856 16:56:23 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:34.856 16:56:23 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:34.856 16:56:23 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:34.856 16:56:23 -- common/autotest_common.sh@899 -- # local i 00:17:34.856 16:56:23 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:34.856 16:56:23 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:34.856 16:56:23 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:35.114 16:56:23 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:35.372 [ 00:17:35.372 { 00:17:35.372 "name": "BaseBdev1", 00:17:35.372 "aliases": [ 00:17:35.372 "0a80b195-dc77-4223-b10b-5cc3f35fd8e5" 00:17:35.372 ], 00:17:35.372 "product_name": "Malloc disk", 00:17:35.372 "block_size": 512, 00:17:35.372 "num_blocks": 65536, 00:17:35.372 "uuid": "0a80b195-dc77-4223-b10b-5cc3f35fd8e5", 00:17:35.372 "assigned_rate_limits": { 00:17:35.372 "rw_ios_per_sec": 0, 00:17:35.372 "rw_mbytes_per_sec": 0, 00:17:35.372 "r_mbytes_per_sec": 0, 00:17:35.372 "w_mbytes_per_sec": 0 
00:17:35.372 }, 00:17:35.372 "claimed": true, 00:17:35.372 "claim_type": "exclusive_write", 00:17:35.372 "zoned": false, 00:17:35.372 "supported_io_types": { 00:17:35.372 "read": true, 00:17:35.372 "write": true, 00:17:35.372 "unmap": true, 00:17:35.372 "write_zeroes": true, 00:17:35.372 "flush": true, 00:17:35.372 "reset": true, 00:17:35.372 "compare": false, 00:17:35.372 "compare_and_write": false, 00:17:35.372 "abort": true, 00:17:35.372 "nvme_admin": false, 00:17:35.372 "nvme_io": false 00:17:35.372 }, 00:17:35.372 "memory_domains": [ 00:17:35.372 { 00:17:35.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.372 "dma_device_type": 2 00:17:35.372 } 00:17:35.372 ], 00:17:35.372 "driver_specific": {} 00:17:35.372 } 00:17:35.372 ] 00:17:35.372 16:56:24 -- common/autotest_common.sh@905 -- # return 0 00:17:35.372 16:56:24 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:35.372 16:56:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:35.372 16:56:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:35.372 16:56:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:35.372 16:56:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:35.372 16:56:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:35.372 16:56:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:35.372 16:56:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:35.372 16:56:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:35.372 16:56:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:35.372 16:56:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.372 16:56:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.631 16:56:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:35.631 "name": "Existed_Raid", 00:17:35.631 "uuid": "d2db1f29-a436-48c3-8aa8-b9b6fb4ba24b", 00:17:35.631 "strip_size_kb": 0, 00:17:35.631 "state": "configuring", 00:17:35.631 "raid_level": "raid1", 00:17:35.631 "superblock": true, 00:17:35.631 "num_base_bdevs": 3, 00:17:35.631 "num_base_bdevs_discovered": 1, 00:17:35.631 "num_base_bdevs_operational": 3, 00:17:35.631 "base_bdevs_list": [ 00:17:35.631 { 00:17:35.631 "name": "BaseBdev1", 00:17:35.631 "uuid": "0a80b195-dc77-4223-b10b-5cc3f35fd8e5", 00:17:35.631 "is_configured": true, 00:17:35.631 "data_offset": 2048, 00:17:35.631 "data_size": 63488 00:17:35.631 }, 00:17:35.631 { 00:17:35.631 "name": "BaseBdev2", 00:17:35.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.631 "is_configured": false, 00:17:35.631 "data_offset": 0, 00:17:35.631 "data_size": 0 00:17:35.631 }, 00:17:35.631 { 00:17:35.631 "name": "BaseBdev3", 00:17:35.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.631 "is_configured": false, 00:17:35.631 "data_offset": 0, 00:17:35.631 "data_size": 0 00:17:35.631 } 00:17:35.631 ] 00:17:35.631 }' 00:17:35.631 16:56:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:35.631 16:56:24 -- common/autotest_common.sh@10 -- # set +x 00:17:36.198 16:56:25 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:36.456 [2024-11-05 16:56:25.274522] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:36.456 [2024-11-05 16:56:25.274744] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006680 name Existed_Raid, state configuring 00:17:36.456 16:56:25 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:36.456 16:56:25 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:36.726 16:56:25 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:36.999 BaseBdev1 00:17:36.999 16:56:25 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:36.999 16:56:25 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:36.999 16:56:25 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:36.999 16:56:25 -- common/autotest_common.sh@899 -- # local i 00:17:36.999 16:56:25 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:36.999 16:56:25 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:36.999 16:56:25 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:37.257 16:56:26 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:37.515 [ 00:17:37.515 { 00:17:37.515 "name": "BaseBdev1", 00:17:37.515 "aliases": [ 00:17:37.515 "80e7ac5e-8f12-42db-8197-ae1a4e9108ce" 00:17:37.515 ], 00:17:37.515 "product_name": "Malloc disk", 00:17:37.515 "block_size": 512, 00:17:37.515 "num_blocks": 65536, 00:17:37.515 "uuid": "80e7ac5e-8f12-42db-8197-ae1a4e9108ce", 00:17:37.515 "assigned_rate_limits": { 00:17:37.515 "rw_ios_per_sec": 0, 00:17:37.515 "rw_mbytes_per_sec": 0, 00:17:37.515 "r_mbytes_per_sec": 0, 00:17:37.515 "w_mbytes_per_sec": 0 00:17:37.515 }, 00:17:37.515 "claimed": false, 00:17:37.515 "zoned": false, 00:17:37.515 "supported_io_types": { 00:17:37.515 "read": true, 00:17:37.515 "write": true, 00:17:37.515 "unmap": true, 00:17:37.515 "write_zeroes": true, 00:17:37.515 "flush": true, 00:17:37.515 "reset": true, 00:17:37.515 "compare": false, 00:17:37.515 "compare_and_write": false, 00:17:37.515 "abort": true, 00:17:37.515 "nvme_admin": false, 00:17:37.515 "nvme_io": false 00:17:37.515 }, 00:17:37.515 "memory_domains": [ 00:17:37.515 { 00:17:37.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.515 "dma_device_type": 2 00:17:37.515 } 00:17:37.515 ], 00:17:37.515 "driver_specific": {} 00:17:37.515 } 00:17:37.515 ] 00:17:37.515 16:56:26 -- common/autotest_common.sh@905 -- # return 0 00:17:37.515 16:56:26 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:37.773 [2024-11-05 16:56:26.514544] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:37.773 [2024-11-05 16:56:26.516672] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:37.773 [2024-11-05 16:56:26.516883] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:37.773 [2024-11-05 16:56:26.517013] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:37.773 [2024-11-05 16:56:26.517079] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:37.773 16:56:26 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:37.773 16:56:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:37.773 16:56:26 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:37.773 16:56:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:37.773 16:56:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:37.773 16:56:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:37.773 16:56:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:37.773 16:56:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:37.773 16:56:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:37.773 16:56:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:37.773 16:56:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:37.773 16:56:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:37.773 16:56:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.773 16:56:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.032 16:56:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:38.032 "name": "Existed_Raid", 00:17:38.032 "uuid": "3e1ec448-c18a-40df-bdf2-e015de90e71d", 00:17:38.032 "strip_size_kb": 0, 00:17:38.032 "state": "configuring", 00:17:38.032 "raid_level": "raid1", 00:17:38.032 "superblock": true, 00:17:38.032 "num_base_bdevs": 3, 00:17:38.032 "num_base_bdevs_discovered": 1, 00:17:38.032 "num_base_bdevs_operational": 3, 00:17:38.032 "base_bdevs_list": [ 00:17:38.032 { 00:17:38.032 "name": "BaseBdev1", 00:17:38.032 "uuid": "80e7ac5e-8f12-42db-8197-ae1a4e9108ce", 00:17:38.032 "is_configured": true, 00:17:38.032 "data_offset": 2048, 00:17:38.032 "data_size": 63488 00:17:38.032 }, 00:17:38.032 { 00:17:38.032 "name": "BaseBdev2", 00:17:38.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.032 "is_configured": false, 00:17:38.032 "data_offset": 0, 00:17:38.032 "data_size": 0 00:17:38.032 }, 00:17:38.032 { 00:17:38.032 "name": "BaseBdev3", 00:17:38.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.032 "is_configured": false, 00:17:38.032 "data_offset": 0, 00:17:38.032 "data_size": 0 00:17:38.032 } 00:17:38.032 ] 00:17:38.032 }' 00:17:38.032 16:56:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:38.032 16:56:26 -- common/autotest_common.sh@10 -- # set +x 00:17:38.599 16:56:27 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:38.857 [2024-11-05 16:56:27.657351] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:38.857 BaseBdev2 00:17:38.857 16:56:27 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:38.857 16:56:27 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:38.857 16:56:27 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:38.857 16:56:27 -- common/autotest_common.sh@899 -- # local i 00:17:38.857 16:56:27 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:38.857 16:56:27 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:38.857 16:56:27 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:39.114 16:56:27 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:39.372 [ 00:17:39.372 { 00:17:39.372 "name": "BaseBdev2", 00:17:39.372 "aliases": [ 00:17:39.372 
"e41deb92-c7a5-4ed8-be2c-f7dc9a545201" 00:17:39.372 ], 00:17:39.372 "product_name": "Malloc disk", 00:17:39.372 "block_size": 512, 00:17:39.372 "num_blocks": 65536, 00:17:39.372 "uuid": "e41deb92-c7a5-4ed8-be2c-f7dc9a545201", 00:17:39.372 "assigned_rate_limits": { 00:17:39.372 "rw_ios_per_sec": 0, 00:17:39.372 "rw_mbytes_per_sec": 0, 00:17:39.372 "r_mbytes_per_sec": 0, 00:17:39.372 "w_mbytes_per_sec": 0 00:17:39.372 }, 00:17:39.372 "claimed": true, 00:17:39.372 "claim_type": "exclusive_write", 00:17:39.372 "zoned": false, 00:17:39.372 "supported_io_types": { 00:17:39.372 "read": true, 00:17:39.372 "write": true, 00:17:39.372 "unmap": true, 00:17:39.372 "write_zeroes": true, 00:17:39.372 "flush": true, 00:17:39.372 "reset": true, 00:17:39.372 "compare": false, 00:17:39.372 "compare_and_write": false, 00:17:39.372 "abort": true, 00:17:39.372 "nvme_admin": false, 00:17:39.372 "nvme_io": false 00:17:39.372 }, 00:17:39.372 "memory_domains": [ 00:17:39.372 { 00:17:39.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.372 "dma_device_type": 2 00:17:39.372 } 00:17:39.372 ], 00:17:39.372 "driver_specific": {} 00:17:39.372 } 00:17:39.372 ] 00:17:39.372 16:56:28 -- common/autotest_common.sh@905 -- # return 0 00:17:39.372 16:56:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:39.372 16:56:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:39.372 16:56:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:39.372 16:56:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:39.372 16:56:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:39.372 16:56:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:39.372 16:56:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:39.372 16:56:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:39.372 16:56:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:39.372 16:56:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:39.372 16:56:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:39.372 16:56:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:39.372 16:56:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.372 16:56:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.630 16:56:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:39.630 "name": "Existed_Raid", 00:17:39.630 "uuid": "3e1ec448-c18a-40df-bdf2-e015de90e71d", 00:17:39.630 "strip_size_kb": 0, 00:17:39.630 "state": "configuring", 00:17:39.630 "raid_level": "raid1", 00:17:39.630 "superblock": true, 00:17:39.630 "num_base_bdevs": 3, 00:17:39.630 "num_base_bdevs_discovered": 2, 00:17:39.630 "num_base_bdevs_operational": 3, 00:17:39.630 "base_bdevs_list": [ 00:17:39.630 { 00:17:39.630 "name": "BaseBdev1", 00:17:39.630 "uuid": "80e7ac5e-8f12-42db-8197-ae1a4e9108ce", 00:17:39.630 "is_configured": true, 00:17:39.630 "data_offset": 2048, 00:17:39.630 "data_size": 63488 00:17:39.630 }, 00:17:39.630 { 00:17:39.630 "name": "BaseBdev2", 00:17:39.630 "uuid": "e41deb92-c7a5-4ed8-be2c-f7dc9a545201", 00:17:39.630 "is_configured": true, 00:17:39.630 "data_offset": 2048, 00:17:39.630 "data_size": 63488 00:17:39.630 }, 00:17:39.630 { 00:17:39.630 "name": "BaseBdev3", 00:17:39.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.631 "is_configured": false, 00:17:39.631 "data_offset": 0, 00:17:39.631 "data_size": 0 00:17:39.631 } 
00:17:39.631 ] 00:17:39.631 }' 00:17:39.631 16:56:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:39.631 16:56:28 -- common/autotest_common.sh@10 -- # set +x 00:17:40.197 16:56:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:40.456 [2024-11-05 16:56:29.318144] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:40.456 [2024-11-05 16:56:29.318665] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:17:40.456 [2024-11-05 16:56:29.318789] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:40.456 [2024-11-05 16:56:29.319007] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:40.456 BaseBdev3 00:17:40.456 [2024-11-05 16:56:29.319497] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:17:40.456 [2024-11-05 16:56:29.319513] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:17:40.456 [2024-11-05 16:56:29.319670] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.456 16:56:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:40.456 16:56:29 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:40.456 16:56:29 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:40.456 16:56:29 -- common/autotest_common.sh@899 -- # local i 00:17:40.456 16:56:29 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:40.456 16:56:29 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:40.456 16:56:29 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:40.714 16:56:29 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:40.972 [ 00:17:40.972 { 00:17:40.972 "name": "BaseBdev3", 00:17:40.972 "aliases": [ 00:17:40.972 "f0de33b9-8634-4738-a9d8-f3d77e0350b7" 00:17:40.972 ], 00:17:40.972 "product_name": "Malloc disk", 00:17:40.972 "block_size": 512, 00:17:40.972 "num_blocks": 65536, 00:17:40.972 "uuid": "f0de33b9-8634-4738-a9d8-f3d77e0350b7", 00:17:40.972 "assigned_rate_limits": { 00:17:40.972 "rw_ios_per_sec": 0, 00:17:40.972 "rw_mbytes_per_sec": 0, 00:17:40.972 "r_mbytes_per_sec": 0, 00:17:40.972 "w_mbytes_per_sec": 0 00:17:40.972 }, 00:17:40.972 "claimed": true, 00:17:40.972 "claim_type": "exclusive_write", 00:17:40.972 "zoned": false, 00:17:40.972 "supported_io_types": { 00:17:40.972 "read": true, 00:17:40.972 "write": true, 00:17:40.972 "unmap": true, 00:17:40.972 "write_zeroes": true, 00:17:40.972 "flush": true, 00:17:40.972 "reset": true, 00:17:40.972 "compare": false, 00:17:40.972 "compare_and_write": false, 00:17:40.972 "abort": true, 00:17:40.972 "nvme_admin": false, 00:17:40.972 "nvme_io": false 00:17:40.972 }, 00:17:40.972 "memory_domains": [ 00:17:40.972 { 00:17:40.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.972 "dma_device_type": 2 00:17:40.972 } 00:17:40.972 ], 00:17:40.972 "driver_specific": {} 00:17:40.972 } 00:17:40.972 ] 00:17:40.972 16:56:29 -- common/autotest_common.sh@905 -- # return 0 00:17:40.972 16:56:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:40.972 16:56:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:40.972 16:56:29 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:40.972 16:56:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:40.972 16:56:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:40.972 16:56:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:40.972 16:56:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:40.972 16:56:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:40.972 16:56:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:40.972 16:56:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:40.973 16:56:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:40.973 16:56:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:40.973 16:56:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.973 16:56:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.234 16:56:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:41.234 "name": "Existed_Raid", 00:17:41.234 "uuid": "3e1ec448-c18a-40df-bdf2-e015de90e71d", 00:17:41.234 "strip_size_kb": 0, 00:17:41.234 "state": "online", 00:17:41.234 "raid_level": "raid1", 00:17:41.234 "superblock": true, 00:17:41.234 "num_base_bdevs": 3, 00:17:41.234 "num_base_bdevs_discovered": 3, 00:17:41.234 "num_base_bdevs_operational": 3, 00:17:41.234 "base_bdevs_list": [ 00:17:41.234 { 00:17:41.234 "name": "BaseBdev1", 00:17:41.234 "uuid": "80e7ac5e-8f12-42db-8197-ae1a4e9108ce", 00:17:41.234 "is_configured": true, 00:17:41.234 "data_offset": 2048, 00:17:41.234 "data_size": 63488 00:17:41.234 }, 00:17:41.234 { 00:17:41.234 "name": "BaseBdev2", 00:17:41.234 "uuid": "e41deb92-c7a5-4ed8-be2c-f7dc9a545201", 00:17:41.234 "is_configured": true, 00:17:41.234 "data_offset": 2048, 00:17:41.234 "data_size": 63488 00:17:41.234 }, 00:17:41.234 { 00:17:41.234 "name": "BaseBdev3", 00:17:41.234 "uuid": "f0de33b9-8634-4738-a9d8-f3d77e0350b7", 00:17:41.234 "is_configured": true, 00:17:41.234 "data_offset": 2048, 00:17:41.234 "data_size": 63488 00:17:41.234 } 00:17:41.234 ] 00:17:41.234 }' 00:17:41.234 16:56:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:41.234 16:56:30 -- common/autotest_common.sh@10 -- # set +x 00:17:41.801 16:56:30 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:42.060 [2024-11-05 16:56:30.835465] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:42.060 16:56:30 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:42.060 16:56:30 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:42.060 16:56:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:42.060 16:56:30 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:42.060 16:56:30 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:42.060 16:56:30 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:42.060 16:56:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:42.060 16:56:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:42.060 16:56:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:42.060 16:56:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:42.060 16:56:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:42.060 16:56:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:42.060 16:56:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
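At this point raid_state_function_test_sb has grown Existed_Raid from two to three raid1 members (BaseBdev3 is created and the array moves from "configuring" to "online" with num_base_bdevs_discovered 3) and then hot-removes BaseBdev1. Because raid1 is a mirrored level, has_redundancy succeeds and the verification in progress expects the array to stay "online" with only 2 of 3 members. A minimal sketch of that sequence, reusing only RPCs that appear verbatim in this trace (the $rpc shorthand is the one assumption; the harness invokes rpc.py with the same -s socket each time):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'

  # 32 MiB malloc bdev with 512-byte blocks (65536 blocks, matching the dump above)
  $rpc bdev_malloc_create 32 512 -b BaseBdev3

  # dump the raid bdev and inspect state / num_base_bdevs_discovered
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

  # hot-remove one mirror leg; a raid1 array is expected to stay online
  $rpc bdev_malloc_delete BaseBdev1
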
00:17:42.060 16:56:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:42.060 16:56:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:42.060 16:56:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.060 16:56:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.319 16:56:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:42.319 "name": "Existed_Raid", 00:17:42.319 "uuid": "3e1ec448-c18a-40df-bdf2-e015de90e71d", 00:17:42.319 "strip_size_kb": 0, 00:17:42.319 "state": "online", 00:17:42.319 "raid_level": "raid1", 00:17:42.319 "superblock": true, 00:17:42.319 "num_base_bdevs": 3, 00:17:42.319 "num_base_bdevs_discovered": 2, 00:17:42.319 "num_base_bdevs_operational": 2, 00:17:42.319 "base_bdevs_list": [ 00:17:42.319 { 00:17:42.319 "name": null, 00:17:42.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.319 "is_configured": false, 00:17:42.319 "data_offset": 2048, 00:17:42.319 "data_size": 63488 00:17:42.319 }, 00:17:42.319 { 00:17:42.319 "name": "BaseBdev2", 00:17:42.319 "uuid": "e41deb92-c7a5-4ed8-be2c-f7dc9a545201", 00:17:42.319 "is_configured": true, 00:17:42.319 "data_offset": 2048, 00:17:42.319 "data_size": 63488 00:17:42.319 }, 00:17:42.319 { 00:17:42.319 "name": "BaseBdev3", 00:17:42.319 "uuid": "f0de33b9-8634-4738-a9d8-f3d77e0350b7", 00:17:42.319 "is_configured": true, 00:17:42.319 "data_offset": 2048, 00:17:42.319 "data_size": 63488 00:17:42.319 } 00:17:42.319 ] 00:17:42.319 }' 00:17:42.319 16:56:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:42.319 16:56:31 -- common/autotest_common.sh@10 -- # set +x 00:17:42.899 16:56:31 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:42.899 16:56:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:42.899 16:56:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.899 16:56:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:43.160 16:56:32 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:43.160 16:56:32 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:43.160 16:56:32 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:43.418 [2024-11-05 16:56:32.288358] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:43.677 16:56:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:43.677 16:56:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:43.677 16:56:32 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.677 16:56:32 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:43.935 16:56:32 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:43.935 16:56:32 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:43.935 16:56:32 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:43.935 [2024-11-05 16:56:32.832667] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:43.935 [2024-11-05 16:56:32.832913] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.935 [2024-11-05 16:56:32.833201] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:44.194 [2024-11-05 16:56:32.908401] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.194 [2024-11-05 16:56:32.908655] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:17:44.194 16:56:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:44.194 16:56:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:44.194 16:56:32 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:44.194 16:56:32 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.452 16:56:33 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:44.452 16:56:33 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:44.452 16:56:33 -- bdev/bdev_raid.sh@287 -- # killprocess 117436 00:17:44.452 16:56:33 -- common/autotest_common.sh@936 -- # '[' -z 117436 ']' 00:17:44.452 16:56:33 -- common/autotest_common.sh@940 -- # kill -0 117436 00:17:44.452 16:56:33 -- common/autotest_common.sh@941 -- # uname 00:17:44.452 16:56:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:44.452 16:56:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117436 00:17:44.452 killing process with pid 117436 00:17:44.452 16:56:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:44.452 16:56:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:44.452 16:56:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117436' 00:17:44.452 16:56:33 -- common/autotest_common.sh@955 -- # kill 117436 00:17:44.452 16:56:33 -- common/autotest_common.sh@960 -- # wait 117436 00:17:44.452 [2024-11-05 16:56:33.205796] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:44.452 [2024-11-05 16:56:33.206021] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:45.386 16:56:34 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:45.386 00:17:45.386 real 0m13.224s 00:17:45.386 user 0m23.409s 00:17:45.386 sys 0m1.510s 00:17:45.386 16:56:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:45.386 ************************************ 00:17:45.386 END TEST raid_state_function_test_sb 00:17:45.386 ************************************ 00:17:45.386 16:56:34 -- common/autotest_common.sh@10 -- # set +x 00:17:45.386 16:56:34 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:17:45.386 16:56:34 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:45.386 16:56:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:45.386 16:56:34 -- common/autotest_common.sh@10 -- # set +x 00:17:45.644 ************************************ 00:17:45.644 START TEST raid_superblock_test 00:17:45.644 ************************************ 00:17:45.644 16:56:34 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 3 00:17:45.644 16:56:34 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:17:45.644 16:56:34 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:45.644 16:56:34 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:45.644 16:56:34 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:45.644 16:56:34 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:45.644 16:56:34 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:45.644 16:56:34 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:45.644 16:56:34 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:45.644 16:56:34 -- bdev/bdev_raid.sh@343 
-- # local raid_bdev_name=raid_bdev1 00:17:45.644 16:56:34 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:45.644 16:56:34 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:45.644 16:56:34 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:45.644 16:56:34 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:45.644 16:56:34 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:17:45.644 16:56:34 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:17:45.644 16:56:34 -- bdev/bdev_raid.sh@357 -- # raid_pid=117834 00:17:45.644 16:56:34 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:45.644 16:56:34 -- bdev/bdev_raid.sh@358 -- # waitforlisten 117834 /var/tmp/spdk-raid.sock 00:17:45.645 16:56:34 -- common/autotest_common.sh@829 -- # '[' -z 117834 ']' 00:17:45.645 16:56:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:45.645 16:56:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:45.645 16:56:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:45.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:45.645 16:56:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:45.645 16:56:34 -- common/autotest_common.sh@10 -- # set +x 00:17:45.645 [2024-11-05 16:56:34.348458] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:45.645 [2024-11-05 16:56:34.348969] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117834 ] 00:17:45.645 [2024-11-05 16:56:34.508574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.902 [2024-11-05 16:56:34.691057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.161 [2024-11-05 16:56:34.867501] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.728 16:56:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:46.728 16:56:35 -- common/autotest_common.sh@862 -- # return 0 00:17:46.728 16:56:35 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:46.728 16:56:35 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:46.728 16:56:35 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:46.728 16:56:35 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:46.728 16:56:35 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:46.728 16:56:35 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:46.728 16:56:35 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:46.728 16:56:35 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:46.728 16:56:35 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:46.728 malloc1 00:17:46.728 16:56:35 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:46.986 [2024-11-05 16:56:35.802057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:46.986 [2024-11-05 16:56:35.802334] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.986 [2024-11-05 16:56:35.802480] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:17:46.986 [2024-11-05 16:56:35.802636] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.986 [2024-11-05 16:56:35.805049] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.986 [2024-11-05 16:56:35.805256] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:46.986 pt1 00:17:46.986 16:56:35 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:46.986 16:56:35 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:46.986 16:56:35 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:46.986 16:56:35 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:46.986 16:56:35 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:46.986 16:56:35 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:46.986 16:56:35 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:46.986 16:56:35 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:46.986 16:56:35 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:47.244 malloc2 00:17:47.244 16:56:36 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:47.503 [2024-11-05 16:56:36.278470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:47.503 [2024-11-05 16:56:36.278728] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.503 [2024-11-05 16:56:36.278903] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:47.503 [2024-11-05 16:56:36.279055] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.503 [2024-11-05 16:56:36.281519] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.503 [2024-11-05 16:56:36.281704] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:47.503 pt2 00:17:47.503 16:56:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:47.503 16:56:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:47.503 16:56:36 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:47.503 16:56:36 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:47.503 16:56:36 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:47.503 16:56:36 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:47.503 16:56:36 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:47.503 16:56:36 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:47.503 16:56:36 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:47.761 malloc3 00:17:47.761 16:56:36 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:48.020 [2024-11-05 16:56:36.792005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:48.020 [2024-11-05 16:56:36.792264] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.020 [2024-11-05 16:56:36.792431] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:48.020 [2024-11-05 16:56:36.792570] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.020 [2024-11-05 16:56:36.794774] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.020 [2024-11-05 16:56:36.794994] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:48.020 pt3 00:17:48.020 16:56:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:48.020 16:56:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:48.020 16:56:36 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:48.279 [2024-11-05 16:56:36.988074] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:48.279 [2024-11-05 16:56:36.989999] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:48.279 [2024-11-05 16:56:36.990213] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:48.279 [2024-11-05 16:56:36.990521] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:17:48.279 [2024-11-05 16:56:36.990669] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:48.279 [2024-11-05 16:56:36.990894] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:48.279 [2024-11-05 16:56:36.991369] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:17:48.279 [2024-11-05 16:56:36.991559] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:17:48.279 [2024-11-05 16:56:36.991813] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.279 16:56:36 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:48.279 16:56:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:48.279 16:56:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:48.279 16:56:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:48.279 16:56:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:48.279 16:56:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:48.279 16:56:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:48.279 16:56:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:48.279 16:56:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:48.279 16:56:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:48.279 16:56:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.279 16:56:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.537 16:56:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:48.537 "name": "raid_bdev1", 00:17:48.537 "uuid": "fe5cc0e3-9bed-4d3e-8c13-095196a02488", 00:17:48.537 "strip_size_kb": 0, 00:17:48.537 "state": "online", 00:17:48.537 "raid_level": "raid1", 00:17:48.537 "superblock": true, 00:17:48.537 "num_base_bdevs": 3, 00:17:48.537 "num_base_bdevs_discovered": 3, 00:17:48.537 "num_base_bdevs_operational": 3, 00:17:48.537 "base_bdevs_list": [ 00:17:48.537 { 00:17:48.537 "name": 
"pt1", 00:17:48.537 "uuid": "89d49d21-3281-5daf-ba57-22319c345277", 00:17:48.537 "is_configured": true, 00:17:48.537 "data_offset": 2048, 00:17:48.537 "data_size": 63488 00:17:48.537 }, 00:17:48.537 { 00:17:48.537 "name": "pt2", 00:17:48.537 "uuid": "757d5cff-bec1-591e-aaf9-4d4e819423dd", 00:17:48.537 "is_configured": true, 00:17:48.537 "data_offset": 2048, 00:17:48.537 "data_size": 63488 00:17:48.537 }, 00:17:48.537 { 00:17:48.537 "name": "pt3", 00:17:48.537 "uuid": "87d17db3-4020-5ec5-8ab3-3c67e5785806", 00:17:48.537 "is_configured": true, 00:17:48.537 "data_offset": 2048, 00:17:48.537 "data_size": 63488 00:17:48.537 } 00:17:48.537 ] 00:17:48.537 }' 00:17:48.537 16:56:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:48.537 16:56:37 -- common/autotest_common.sh@10 -- # set +x 00:17:49.114 16:56:37 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:49.114 16:56:37 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:49.372 [2024-11-05 16:56:38.076478] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.372 16:56:38 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=fe5cc0e3-9bed-4d3e-8c13-095196a02488 00:17:49.372 16:56:38 -- bdev/bdev_raid.sh@380 -- # '[' -z fe5cc0e3-9bed-4d3e-8c13-095196a02488 ']' 00:17:49.372 16:56:38 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:49.630 [2024-11-05 16:56:38.324354] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:49.630 [2024-11-05 16:56:38.324526] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:49.630 [2024-11-05 16:56:38.324702] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.630 [2024-11-05 16:56:38.324879] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:49.630 [2024-11-05 16:56:38.324983] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:17:49.630 16:56:38 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.630 16:56:38 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:49.887 16:56:38 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:49.887 16:56:38 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:49.887 16:56:38 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:49.887 16:56:38 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:50.145 16:56:38 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:50.145 16:56:38 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:50.145 16:56:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:50.145 16:56:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:50.402 16:56:39 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:50.402 16:56:39 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:50.660 16:56:39 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:50.660 16:56:39 -- 
bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:50.660 16:56:39 -- common/autotest_common.sh@650 -- # local es=0 00:17:50.660 16:56:39 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:50.660 16:56:39 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.660 16:56:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.660 16:56:39 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.660 16:56:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.660 16:56:39 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.660 16:56:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.660 16:56:39 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.660 16:56:39 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:50.660 16:56:39 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:50.923 [2024-11-05 16:56:39.660618] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:50.923 [2024-11-05 16:56:39.662678] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:50.923 [2024-11-05 16:56:39.662935] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:50.923 [2024-11-05 16:56:39.663107] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:50.923 [2024-11-05 16:56:39.663300] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:50.923 [2024-11-05 16:56:39.663453] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:50.923 [2024-11-05 16:56:39.663624] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.923 [2024-11-05 16:56:39.663674] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:17:50.923 request: 00:17:50.923 { 00:17:50.923 "name": "raid_bdev1", 00:17:50.923 "raid_level": "raid1", 00:17:50.923 "base_bdevs": [ 00:17:50.923 "malloc1", 00:17:50.923 "malloc2", 00:17:50.923 "malloc3" 00:17:50.923 ], 00:17:50.923 "superblock": false, 00:17:50.923 "method": "bdev_raid_create", 00:17:50.923 "req_id": 1 00:17:50.923 } 00:17:50.923 Got JSON-RPC error response 00:17:50.923 response: 00:17:50.923 { 00:17:50.923 "code": -17, 00:17:50.923 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:50.923 } 00:17:50.923 16:56:39 -- common/autotest_common.sh@653 -- # es=1 00:17:50.923 16:56:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:50.923 16:56:39 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:50.923 16:56:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:50.923 16:56:39 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:50.923 16:56:39 -- bdev/bdev_raid.sh@403 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.188 16:56:39 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:51.188 16:56:39 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:51.188 16:56:39 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:51.188 [2024-11-05 16:56:40.076695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:51.188 [2024-11-05 16:56:40.076934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.188 [2024-11-05 16:56:40.077066] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:51.188 [2024-11-05 16:56:40.077193] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.189 [2024-11-05 16:56:40.079816] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.189 [2024-11-05 16:56:40.080015] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:51.189 [2024-11-05 16:56:40.080287] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:51.189 [2024-11-05 16:56:40.080441] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:51.189 pt1 00:17:51.446 16:56:40 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:51.446 16:56:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:51.446 16:56:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:51.446 16:56:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:51.446 16:56:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:51.446 16:56:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:51.446 16:56:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:51.446 16:56:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:51.446 16:56:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:51.446 16:56:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:51.446 16:56:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.446 16:56:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.446 16:56:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:51.446 "name": "raid_bdev1", 00:17:51.446 "uuid": "fe5cc0e3-9bed-4d3e-8c13-095196a02488", 00:17:51.446 "strip_size_kb": 0, 00:17:51.446 "state": "configuring", 00:17:51.446 "raid_level": "raid1", 00:17:51.446 "superblock": true, 00:17:51.446 "num_base_bdevs": 3, 00:17:51.446 "num_base_bdevs_discovered": 1, 00:17:51.446 "num_base_bdevs_operational": 3, 00:17:51.446 "base_bdevs_list": [ 00:17:51.446 { 00:17:51.446 "name": "pt1", 00:17:51.446 "uuid": "89d49d21-3281-5daf-ba57-22319c345277", 00:17:51.446 "is_configured": true, 00:17:51.446 "data_offset": 2048, 00:17:51.446 "data_size": 63488 00:17:51.446 }, 00:17:51.446 { 00:17:51.446 "name": null, 00:17:51.446 "uuid": "757d5cff-bec1-591e-aaf9-4d4e819423dd", 00:17:51.446 "is_configured": false, 00:17:51.446 "data_offset": 2048, 00:17:51.446 "data_size": 63488 00:17:51.446 }, 00:17:51.446 { 00:17:51.446 "name": null, 00:17:51.446 "uuid": "87d17db3-4020-5ec5-8ab3-3c67e5785806", 00:17:51.446 "is_configured": false, 00:17:51.446 "data_offset": 2048, 
00:17:51.446 "data_size": 63488 00:17:51.446 } 00:17:51.446 ] 00:17:51.446 }' 00:17:51.446 16:56:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:51.446 16:56:40 -- common/autotest_common.sh@10 -- # set +x 00:17:52.381 16:56:40 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:52.381 16:56:40 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:52.381 [2024-11-05 16:56:41.136991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:52.381 [2024-11-05 16:56:41.137406] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.381 [2024-11-05 16:56:41.137589] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:52.381 [2024-11-05 16:56:41.137711] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.381 [2024-11-05 16:56:41.138324] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.381 [2024-11-05 16:56:41.138492] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:52.381 [2024-11-05 16:56:41.138713] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:52.381 [2024-11-05 16:56:41.138830] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:52.381 pt2 00:17:52.381 16:56:41 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:52.640 [2024-11-05 16:56:41.401096] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:52.640 16:56:41 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:52.640 16:56:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:52.640 16:56:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:52.640 16:56:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:52.640 16:56:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:52.640 16:56:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:52.640 16:56:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:52.640 16:56:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:52.640 16:56:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:52.640 16:56:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:52.640 16:56:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.640 16:56:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.898 16:56:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:52.898 "name": "raid_bdev1", 00:17:52.898 "uuid": "fe5cc0e3-9bed-4d3e-8c13-095196a02488", 00:17:52.898 "strip_size_kb": 0, 00:17:52.898 "state": "configuring", 00:17:52.898 "raid_level": "raid1", 00:17:52.898 "superblock": true, 00:17:52.898 "num_base_bdevs": 3, 00:17:52.898 "num_base_bdevs_discovered": 1, 00:17:52.898 "num_base_bdevs_operational": 3, 00:17:52.898 "base_bdevs_list": [ 00:17:52.898 { 00:17:52.898 "name": "pt1", 00:17:52.898 "uuid": "89d49d21-3281-5daf-ba57-22319c345277", 00:17:52.898 "is_configured": true, 00:17:52.898 "data_offset": 2048, 00:17:52.898 "data_size": 63488 00:17:52.898 }, 00:17:52.898 { 00:17:52.898 "name": null, 00:17:52.898 "uuid": 
"757d5cff-bec1-591e-aaf9-4d4e819423dd", 00:17:52.898 "is_configured": false, 00:17:52.898 "data_offset": 2048, 00:17:52.898 "data_size": 63488 00:17:52.898 }, 00:17:52.898 { 00:17:52.898 "name": null, 00:17:52.898 "uuid": "87d17db3-4020-5ec5-8ab3-3c67e5785806", 00:17:52.898 "is_configured": false, 00:17:52.898 "data_offset": 2048, 00:17:52.898 "data_size": 63488 00:17:52.898 } 00:17:52.898 ] 00:17:52.898 }' 00:17:52.898 16:56:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:52.898 16:56:41 -- common/autotest_common.sh@10 -- # set +x 00:17:53.464 16:56:42 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:53.464 16:56:42 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:53.464 16:56:42 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:53.724 [2024-11-05 16:56:42.553276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:53.724 [2024-11-05 16:56:42.553724] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.724 [2024-11-05 16:56:42.553904] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:53.724 [2024-11-05 16:56:42.554062] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.724 [2024-11-05 16:56:42.554683] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.724 [2024-11-05 16:56:42.554896] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:53.724 [2024-11-05 16:56:42.555133] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:53.724 [2024-11-05 16:56:42.555302] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:53.724 pt2 00:17:53.724 16:56:42 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:53.724 16:56:42 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:53.724 16:56:42 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:53.983 [2024-11-05 16:56:42.809360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:53.983 [2024-11-05 16:56:42.809714] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.983 [2024-11-05 16:56:42.809865] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:53.983 [2024-11-05 16:56:42.809987] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.983 [2024-11-05 16:56:42.810603] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.983 [2024-11-05 16:56:42.810772] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:53.983 [2024-11-05 16:56:42.811018] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:53.983 [2024-11-05 16:56:42.811140] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:53.983 [2024-11-05 16:56:42.811404] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:17:53.983 [2024-11-05 16:56:42.811562] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:53.983 [2024-11-05 16:56:42.811807] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:17:53.983 [2024-11-05 16:56:42.812300] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:17:53.983 [2024-11-05 16:56:42.812428] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:17:53.983 [2024-11-05 16:56:42.812656] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.983 pt3 00:17:53.983 16:56:42 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:53.983 16:56:42 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:53.983 16:56:42 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:53.983 16:56:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:53.983 16:56:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:53.983 16:56:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:53.983 16:56:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:53.983 16:56:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:53.983 16:56:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:53.983 16:56:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:53.983 16:56:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:53.983 16:56:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:53.983 16:56:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.983 16:56:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.241 16:56:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:54.242 "name": "raid_bdev1", 00:17:54.242 "uuid": "fe5cc0e3-9bed-4d3e-8c13-095196a02488", 00:17:54.242 "strip_size_kb": 0, 00:17:54.242 "state": "online", 00:17:54.242 "raid_level": "raid1", 00:17:54.242 "superblock": true, 00:17:54.242 "num_base_bdevs": 3, 00:17:54.242 "num_base_bdevs_discovered": 3, 00:17:54.242 "num_base_bdevs_operational": 3, 00:17:54.242 "base_bdevs_list": [ 00:17:54.242 { 00:17:54.242 "name": "pt1", 00:17:54.242 "uuid": "89d49d21-3281-5daf-ba57-22319c345277", 00:17:54.242 "is_configured": true, 00:17:54.242 "data_offset": 2048, 00:17:54.242 "data_size": 63488 00:17:54.242 }, 00:17:54.242 { 00:17:54.242 "name": "pt2", 00:17:54.242 "uuid": "757d5cff-bec1-591e-aaf9-4d4e819423dd", 00:17:54.242 "is_configured": true, 00:17:54.242 "data_offset": 2048, 00:17:54.242 "data_size": 63488 00:17:54.242 }, 00:17:54.242 { 00:17:54.242 "name": "pt3", 00:17:54.242 "uuid": "87d17db3-4020-5ec5-8ab3-3c67e5785806", 00:17:54.242 "is_configured": true, 00:17:54.242 "data_offset": 2048, 00:17:54.242 "data_size": 63488 00:17:54.242 } 00:17:54.242 ] 00:17:54.242 }' 00:17:54.242 16:56:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:54.242 16:56:43 -- common/autotest_common.sh@10 -- # set +x 00:17:55.206 16:56:43 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:55.206 16:56:43 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:55.206 [2024-11-05 16:56:43.997912] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.206 16:56:44 -- bdev/bdev_raid.sh@430 -- # '[' fe5cc0e3-9bed-4d3e-8c13-095196a02488 '!=' fe5cc0e3-9bed-4d3e-8c13-095196a02488 ']' 00:17:55.206 16:56:44 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:17:55.206 16:56:44 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:55.206 
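The superblock test above exercises on-disk reassembly: raid_bdev1 was created over pt1, pt2 and pt3 with -s, so every member carries a raid superblock (visible as data_offset 2048 / data_size 63488 in the dumps: the first 2048 of each member's 65536 blocks are reserved, 2048 + 63488 = 65536). After bdev_raid_delete the superblocks stay behind, which is why bdev_raid_create on the bare malloc bdevs failed earlier with JSON-RPC error -17 ("File exists"), while simply re-registering a passthru member lets the examine path find the superblock ("raid superblock found on bdev pt1") and re-claim it into raid_bdev1. A condensed sketch of that cycle, again using only RPCs present in this trace and the same $rpc shorthand assumed above:

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'

  # create the mirror with an on-disk superblock (-s)
  $rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s

  # tear down the array and one member; superblocks remain on the base bdevs
  $rpc bdev_raid_delete raid_bdev1
  $rpc bdev_passthru_delete pt1

  # re-register the member; examine re-claims pt1 into raid_bdev1, which sits
  # in "configuring" until enough members are back and then goes "online"
  $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001

As the dumps that follow show, a raid1 array with two of three members configured is already reported "online"; the missing slot is left as a null entry in base_bdevs_list.
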
16:56:44 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:55.206 16:56:44 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:55.465 [2024-11-05 16:56:44.253803] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:55.465 16:56:44 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:55.465 16:56:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:55.465 16:56:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:55.465 16:56:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:55.465 16:56:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:55.465 16:56:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:55.465 16:56:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.465 16:56:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.465 16:56:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.465 16:56:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:55.465 16:56:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.465 16:56:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.724 16:56:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:55.724 "name": "raid_bdev1", 00:17:55.724 "uuid": "fe5cc0e3-9bed-4d3e-8c13-095196a02488", 00:17:55.724 "strip_size_kb": 0, 00:17:55.724 "state": "online", 00:17:55.724 "raid_level": "raid1", 00:17:55.724 "superblock": true, 00:17:55.724 "num_base_bdevs": 3, 00:17:55.724 "num_base_bdevs_discovered": 2, 00:17:55.724 "num_base_bdevs_operational": 2, 00:17:55.724 "base_bdevs_list": [ 00:17:55.724 { 00:17:55.724 "name": null, 00:17:55.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.724 "is_configured": false, 00:17:55.724 "data_offset": 2048, 00:17:55.724 "data_size": 63488 00:17:55.724 }, 00:17:55.724 { 00:17:55.724 "name": "pt2", 00:17:55.724 "uuid": "757d5cff-bec1-591e-aaf9-4d4e819423dd", 00:17:55.724 "is_configured": true, 00:17:55.724 "data_offset": 2048, 00:17:55.724 "data_size": 63488 00:17:55.724 }, 00:17:55.724 { 00:17:55.724 "name": "pt3", 00:17:55.724 "uuid": "87d17db3-4020-5ec5-8ab3-3c67e5785806", 00:17:55.724 "is_configured": true, 00:17:55.724 "data_offset": 2048, 00:17:55.724 "data_size": 63488 00:17:55.724 } 00:17:55.724 ] 00:17:55.724 }' 00:17:55.724 16:56:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:55.724 16:56:44 -- common/autotest_common.sh@10 -- # set +x 00:17:56.292 16:56:45 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:56.551 [2024-11-05 16:56:45.426024] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:56.551 [2024-11-05 16:56:45.426217] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.551 [2024-11-05 16:56:45.426404] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.551 [2024-11-05 16:56:45.426583] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.551 [2024-11-05 16:56:45.426720] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:17:56.551 16:56:45 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.551 16:56:45 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:17:56.810 16:56:45 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:17:56.810 16:56:45 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:17:56.810 16:56:45 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:17:56.810 16:56:45 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:56.810 16:56:45 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:57.069 16:56:45 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:57.069 16:56:45 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:57.069 16:56:45 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:57.329 16:56:46 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:57.329 16:56:46 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:57.329 16:56:46 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:17:57.329 16:56:46 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:57.329 16:56:46 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:57.588 [2024-11-05 16:56:46.338201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:57.588 [2024-11-05 16:56:46.338445] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.588 [2024-11-05 16:56:46.338523] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:57.588 [2024-11-05 16:56:46.338677] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.588 [2024-11-05 16:56:46.341447] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.588 [2024-11-05 16:56:46.341625] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:57.588 [2024-11-05 16:56:46.341867] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:57.588 [2024-11-05 16:56:46.342006] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:57.588 pt2 00:17:57.588 16:56:46 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:57.588 16:56:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:57.588 16:56:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:57.588 16:56:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:57.588 16:56:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:57.588 16:56:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:57.588 16:56:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:57.588 16:56:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:57.588 16:56:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:57.588 16:56:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:57.588 16:56:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.588 16:56:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.846 16:56:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:57.846 "name": "raid_bdev1", 00:17:57.847 "uuid": "fe5cc0e3-9bed-4d3e-8c13-095196a02488", 00:17:57.847 "strip_size_kb": 0, 00:17:57.847 
"state": "configuring", 00:17:57.847 "raid_level": "raid1", 00:17:57.847 "superblock": true, 00:17:57.847 "num_base_bdevs": 3, 00:17:57.847 "num_base_bdevs_discovered": 1, 00:17:57.847 "num_base_bdevs_operational": 2, 00:17:57.847 "base_bdevs_list": [ 00:17:57.847 { 00:17:57.847 "name": null, 00:17:57.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.847 "is_configured": false, 00:17:57.847 "data_offset": 2048, 00:17:57.847 "data_size": 63488 00:17:57.847 }, 00:17:57.847 { 00:17:57.847 "name": "pt2", 00:17:57.847 "uuid": "757d5cff-bec1-591e-aaf9-4d4e819423dd", 00:17:57.847 "is_configured": true, 00:17:57.847 "data_offset": 2048, 00:17:57.847 "data_size": 63488 00:17:57.847 }, 00:17:57.847 { 00:17:57.847 "name": null, 00:17:57.847 "uuid": "87d17db3-4020-5ec5-8ab3-3c67e5785806", 00:17:57.847 "is_configured": false, 00:17:57.847 "data_offset": 2048, 00:17:57.847 "data_size": 63488 00:17:57.847 } 00:17:57.847 ] 00:17:57.847 }' 00:17:57.847 16:56:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:57.847 16:56:46 -- common/autotest_common.sh@10 -- # set +x 00:17:58.413 16:56:47 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:17:58.413 16:56:47 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:58.413 16:56:47 -- bdev/bdev_raid.sh@462 -- # i=2 00:17:58.413 16:56:47 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:58.672 [2024-11-05 16:56:47.374493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:58.672 [2024-11-05 16:56:47.374740] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.672 [2024-11-05 16:56:47.374915] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:58.672 [2024-11-05 16:56:47.375048] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.672 [2024-11-05 16:56:47.375578] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.672 [2024-11-05 16:56:47.375735] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:58.672 [2024-11-05 16:56:47.375958] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:58.672 [2024-11-05 16:56:47.376088] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:58.672 [2024-11-05 16:56:47.376247] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:17:58.672 [2024-11-05 16:56:47.376363] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:58.672 [2024-11-05 16:56:47.376561] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:58.672 [2024-11-05 16:56:47.377005] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:17:58.672 [2024-11-05 16:56:47.377133] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:17:58.672 [2024-11-05 16:56:47.377354] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.672 pt3 00:17:58.672 16:56:47 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:58.672 16:56:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:58.672 16:56:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:58.672 16:56:47 -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:58.672 16:56:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:58.672 16:56:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:58.672 16:56:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:58.672 16:56:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:58.672 16:56:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:58.672 16:56:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:58.672 16:56:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.672 16:56:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.931 16:56:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:58.931 "name": "raid_bdev1", 00:17:58.931 "uuid": "fe5cc0e3-9bed-4d3e-8c13-095196a02488", 00:17:58.931 "strip_size_kb": 0, 00:17:58.931 "state": "online", 00:17:58.931 "raid_level": "raid1", 00:17:58.931 "superblock": true, 00:17:58.931 "num_base_bdevs": 3, 00:17:58.931 "num_base_bdevs_discovered": 2, 00:17:58.931 "num_base_bdevs_operational": 2, 00:17:58.931 "base_bdevs_list": [ 00:17:58.931 { 00:17:58.931 "name": null, 00:17:58.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.931 "is_configured": false, 00:17:58.931 "data_offset": 2048, 00:17:58.931 "data_size": 63488 00:17:58.931 }, 00:17:58.931 { 00:17:58.931 "name": "pt2", 00:17:58.931 "uuid": "757d5cff-bec1-591e-aaf9-4d4e819423dd", 00:17:58.931 "is_configured": true, 00:17:58.931 "data_offset": 2048, 00:17:58.931 "data_size": 63488 00:17:58.931 }, 00:17:58.931 { 00:17:58.931 "name": "pt3", 00:17:58.931 "uuid": "87d17db3-4020-5ec5-8ab3-3c67e5785806", 00:17:58.931 "is_configured": true, 00:17:58.931 "data_offset": 2048, 00:17:58.931 "data_size": 63488 00:17:58.931 } 00:17:58.931 ] 00:17:58.931 }' 00:17:58.931 16:56:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:58.931 16:56:47 -- common/autotest_common.sh@10 -- # set +x 00:17:59.499 16:56:48 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:17:59.499 16:56:48 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:59.758 [2024-11-05 16:56:48.418731] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:59.758 [2024-11-05 16:56:48.418953] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:59.758 [2024-11-05 16:56:48.419143] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.758 [2024-11-05 16:56:48.419375] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:59.758 [2024-11-05 16:56:48.419487] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:17:59.758 16:56:48 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.758 16:56:48 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:18:00.017 16:56:48 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:18:00.017 16:56:48 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:18:00.017 16:56:48 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:00.276 [2024-11-05 16:56:48.922814] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:18:00.276 [2024-11-05 16:56:48.923110] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.276 [2024-11-05 16:56:48.923316] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:00.276 [2024-11-05 16:56:48.923468] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.276 [2024-11-05 16:56:48.925981] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.276 [2024-11-05 16:56:48.926172] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:00.276 [2024-11-05 16:56:48.926423] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:00.276 [2024-11-05 16:56:48.926603] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:00.276 pt1 00:18:00.276 16:56:48 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:00.276 16:56:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:00.276 16:56:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:00.276 16:56:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:00.276 16:56:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:00.276 16:56:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:00.276 16:56:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:00.276 16:56:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:00.276 16:56:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:00.276 16:56:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:00.276 16:56:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.276 16:56:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.534 16:56:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:00.534 "name": "raid_bdev1", 00:18:00.534 "uuid": "fe5cc0e3-9bed-4d3e-8c13-095196a02488", 00:18:00.534 "strip_size_kb": 0, 00:18:00.534 "state": "configuring", 00:18:00.534 "raid_level": "raid1", 00:18:00.534 "superblock": true, 00:18:00.534 "num_base_bdevs": 3, 00:18:00.534 "num_base_bdevs_discovered": 1, 00:18:00.534 "num_base_bdevs_operational": 3, 00:18:00.534 "base_bdevs_list": [ 00:18:00.534 { 00:18:00.534 "name": "pt1", 00:18:00.534 "uuid": "89d49d21-3281-5daf-ba57-22319c345277", 00:18:00.534 "is_configured": true, 00:18:00.534 "data_offset": 2048, 00:18:00.534 "data_size": 63488 00:18:00.534 }, 00:18:00.534 { 00:18:00.534 "name": null, 00:18:00.534 "uuid": "757d5cff-bec1-591e-aaf9-4d4e819423dd", 00:18:00.534 "is_configured": false, 00:18:00.534 "data_offset": 2048, 00:18:00.534 "data_size": 63488 00:18:00.534 }, 00:18:00.534 { 00:18:00.534 "name": null, 00:18:00.534 "uuid": "87d17db3-4020-5ec5-8ab3-3c67e5785806", 00:18:00.534 "is_configured": false, 00:18:00.534 "data_offset": 2048, 00:18:00.534 "data_size": 63488 00:18:00.534 } 00:18:00.534 ] 00:18:00.534 }' 00:18:00.534 16:56:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:00.534 16:56:49 -- common/autotest_common.sh@10 -- # set +x 00:18:01.101 16:56:49 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:18:01.101 16:56:49 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:01.101 16:56:49 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:01.362 16:56:50 -- 
bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:01.362 16:56:50 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:01.362 16:56:50 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:01.621 16:56:50 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:01.621 16:56:50 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:01.621 16:56:50 -- bdev/bdev_raid.sh@489 -- # i=2 00:18:01.621 16:56:50 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:01.621 [2024-11-05 16:56:50.467457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:01.621 [2024-11-05 16:56:50.467730] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.621 [2024-11-05 16:56:50.467909] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:01.621 [2024-11-05 16:56:50.468037] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.621 [2024-11-05 16:56:50.468589] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.621 [2024-11-05 16:56:50.468780] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:01.621 [2024-11-05 16:56:50.468989] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:01.621 [2024-11-05 16:56:50.469097] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:01.621 [2024-11-05 16:56:50.469189] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.621 [2024-11-05 16:56:50.469241] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state configuring 00:18:01.621 [2024-11-05 16:56:50.469422] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:01.621 pt3 00:18:01.621 16:56:50 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:01.621 16:56:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:01.621 16:56:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:01.621 16:56:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:01.621 16:56:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:01.621 16:56:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:01.621 16:56:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:01.621 16:56:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:01.621 16:56:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:01.621 16:56:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:01.621 16:56:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.621 16:56:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.880 16:56:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:01.880 "name": "raid_bdev1", 00:18:01.880 "uuid": "fe5cc0e3-9bed-4d3e-8c13-095196a02488", 00:18:01.880 "strip_size_kb": 0, 00:18:01.880 "state": "configuring", 00:18:01.880 "raid_level": "raid1", 00:18:01.880 "superblock": true, 00:18:01.880 "num_base_bdevs": 3, 00:18:01.880 "num_base_bdevs_discovered": 1, 00:18:01.880 
"num_base_bdevs_operational": 2, 00:18:01.880 "base_bdevs_list": [ 00:18:01.880 { 00:18:01.880 "name": null, 00:18:01.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.880 "is_configured": false, 00:18:01.880 "data_offset": 2048, 00:18:01.880 "data_size": 63488 00:18:01.880 }, 00:18:01.880 { 00:18:01.880 "name": null, 00:18:01.880 "uuid": "757d5cff-bec1-591e-aaf9-4d4e819423dd", 00:18:01.880 "is_configured": false, 00:18:01.880 "data_offset": 2048, 00:18:01.880 "data_size": 63488 00:18:01.880 }, 00:18:01.880 { 00:18:01.880 "name": "pt3", 00:18:01.880 "uuid": "87d17db3-4020-5ec5-8ab3-3c67e5785806", 00:18:01.880 "is_configured": true, 00:18:01.880 "data_offset": 2048, 00:18:01.880 "data_size": 63488 00:18:01.880 } 00:18:01.880 ] 00:18:01.880 }' 00:18:01.880 16:56:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:01.880 16:56:50 -- common/autotest_common.sh@10 -- # set +x 00:18:02.447 16:56:51 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:18:02.447 16:56:51 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:02.447 16:56:51 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:02.706 [2024-11-05 16:56:51.499748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:02.706 [2024-11-05 16:56:51.500040] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.706 [2024-11-05 16:56:51.500188] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:02.706 [2024-11-05 16:56:51.500333] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.706 [2024-11-05 16:56:51.500932] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.706 [2024-11-05 16:56:51.501093] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:02.706 [2024-11-05 16:56:51.501284] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:02.706 [2024-11-05 16:56:51.501398] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:02.706 [2024-11-05 16:56:51.501617] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:18:02.706 [2024-11-05 16:56:51.501721] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:02.706 [2024-11-05 16:56:51.501887] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:18:02.706 [2024-11-05 16:56:51.502337] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:18:02.706 [2024-11-05 16:56:51.502464] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:18:02.706 [2024-11-05 16:56:51.502691] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.706 pt2 00:18:02.706 16:56:51 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:18:02.706 16:56:51 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:02.706 16:56:51 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:02.706 16:56:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:02.706 16:56:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:02.706 16:56:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:02.706 16:56:51 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=0 00:18:02.706 16:56:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:02.706 16:56:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:02.706 16:56:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:02.706 16:56:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:02.706 16:56:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:02.706 16:56:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.706 16:56:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.966 16:56:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:02.966 "name": "raid_bdev1", 00:18:02.966 "uuid": "fe5cc0e3-9bed-4d3e-8c13-095196a02488", 00:18:02.966 "strip_size_kb": 0, 00:18:02.966 "state": "online", 00:18:02.966 "raid_level": "raid1", 00:18:02.966 "superblock": true, 00:18:02.966 "num_base_bdevs": 3, 00:18:02.966 "num_base_bdevs_discovered": 2, 00:18:02.966 "num_base_bdevs_operational": 2, 00:18:02.966 "base_bdevs_list": [ 00:18:02.966 { 00:18:02.966 "name": null, 00:18:02.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.966 "is_configured": false, 00:18:02.966 "data_offset": 2048, 00:18:02.966 "data_size": 63488 00:18:02.966 }, 00:18:02.966 { 00:18:02.966 "name": "pt2", 00:18:02.966 "uuid": "757d5cff-bec1-591e-aaf9-4d4e819423dd", 00:18:02.966 "is_configured": true, 00:18:02.966 "data_offset": 2048, 00:18:02.966 "data_size": 63488 00:18:02.966 }, 00:18:02.966 { 00:18:02.966 "name": "pt3", 00:18:02.966 "uuid": "87d17db3-4020-5ec5-8ab3-3c67e5785806", 00:18:02.966 "is_configured": true, 00:18:02.966 "data_offset": 2048, 00:18:02.966 "data_size": 63488 00:18:02.966 } 00:18:02.966 ] 00:18:02.966 }' 00:18:02.966 16:56:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:02.966 16:56:51 -- common/autotest_common.sh@10 -- # set +x 00:18:03.534 16:56:52 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:03.534 16:56:52 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:18:03.793 [2024-11-05 16:56:52.540150] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.793 16:56:52 -- bdev/bdev_raid.sh@506 -- # '[' fe5cc0e3-9bed-4d3e-8c13-095196a02488 '!=' fe5cc0e3-9bed-4d3e-8c13-095196a02488 ']' 00:18:03.793 16:56:52 -- bdev/bdev_raid.sh@511 -- # killprocess 117834 00:18:03.793 16:56:52 -- common/autotest_common.sh@936 -- # '[' -z 117834 ']' 00:18:03.793 16:56:52 -- common/autotest_common.sh@940 -- # kill -0 117834 00:18:03.793 16:56:52 -- common/autotest_common.sh@941 -- # uname 00:18:03.793 16:56:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:03.793 16:56:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117834 00:18:03.793 killing process with pid 117834 00:18:03.793 16:56:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:03.793 16:56:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:03.793 16:56:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117834' 00:18:03.793 16:56:52 -- common/autotest_common.sh@955 -- # kill 117834 00:18:03.793 [2024-11-05 16:56:52.581763] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:03.793 16:56:52 -- common/autotest_common.sh@960 -- # wait 117834 00:18:03.793 [2024-11-05 16:56:52.581830] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
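
Every verify_raid_bdev_state call in this test expands to the same bdev/bdev_raid.sh@117-@127 trace seen above: stash the expected values in locals, then fetch bdev_raid_get_bdevs all over the raid socket and keep the one entry jq matches by name. The comparisons themselves run under xtrace_disable (@129) and never reach the log, so in the sketch below the [[ ... ]] tests and the rpc_py shorthand are assumptions; the rest mirrors the trace.

    # Minimal sketch of verify_raid_bdev_state as reconstructed from the trace.
    rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    verify_raid_bdev_state() {
        local raid_bdev_name=$1          # e.g. raid_bdev1
        local expected_state=$2          # configuring | online | offline
        local raid_level=$3              # raid0 | raid1 | ...
        local strip_size=$4              # 0 for raid1, which has no stripes
        local num_base_bdevs_operational=$5
        local raid_bdev_info
        # @127: dump every raid bdev, keep only the one under test.
        raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        # Assumed checks, hidden behind xtrace_disable in the real helper:
        [[ $(jq -r '.state' <<< "$raid_bdev_info") == "$expected_state" ]]
        [[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == "$raid_level" ]]
        [[ $(jq -r '.strip_size_kb' <<< "$raid_bdev_info") == "$strip_size" ]]
        [[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") == "$num_base_bdevs_operational" ]]
    }

Read that way, the verify_raid_bdev_state raid_bdev1 online raid1 0 2 call above asserts exactly what the JSON dump shows: state "online", two operational base bdevs (pt2 and pt3, with the dropped slot zeroed out), and strip_size_kb 0.
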
00:18:03.793 [2024-11-05 16:56:52.581886] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.793 [2024-11-05 16:56:52.581895] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:18:04.052 [2024-11-05 16:56:52.772680] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:04.987 ************************************ 00:18:04.987 END TEST raid_superblock_test 00:18:04.987 ************************************ 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:04.987 00:18:04.987 real 0m19.413s 00:18:04.987 user 0m35.756s 00:18:04.987 sys 0m2.154s 00:18:04.987 16:56:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:04.987 16:56:53 -- common/autotest_common.sh@10 -- # set +x 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:18:04.987 16:56:53 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:04.987 16:56:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:04.987 16:56:53 -- common/autotest_common.sh@10 -- # set +x 00:18:04.987 ************************************ 00:18:04.987 START TEST raid_state_function_test 00:18:04.987 ************************************ 00:18:04.987 16:56:53 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 4 false 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@214 -- # 
strip_size_create_arg='-z 64' 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@226 -- # raid_pid=118445 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 118445' 00:18:04.987 Process raid pid: 118445 00:18:04.987 16:56:53 -- bdev/bdev_raid.sh@228 -- # waitforlisten 118445 /var/tmp/spdk-raid.sock 00:18:04.987 16:56:53 -- common/autotest_common.sh@829 -- # '[' -z 118445 ']' 00:18:04.987 16:56:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:04.987 16:56:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.987 16:56:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:04.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:04.987 16:56:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.987 16:56:53 -- common/autotest_common.sh@10 -- # set +x 00:18:04.988 [2024-11-05 16:56:53.823000] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:04.988 [2024-11-05 16:56:53.823324] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.246 [2024-11-05 16:56:53.977356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.247 [2024-11-05 16:56:54.144719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.505 [2024-11-05 16:56:54.313629] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.073 16:56:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.073 16:56:54 -- common/autotest_common.sh@862 -- # return 0 00:18:06.073 16:56:54 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:06.073 [2024-11-05 16:56:54.958339] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:06.073 [2024-11-05 16:56:54.958597] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:06.073 [2024-11-05 16:56:54.958725] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:06.073 [2024-11-05 16:56:54.958788] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:06.073 [2024-11-05 16:56:54.958935] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:06.073 [2024-11-05 16:56:54.959018] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:06.073 [2024-11-05 16:56:54.959204] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:06.073 [2024-11-05 16:56:54.959306] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:06.332 16:56:54 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:06.332 16:56:54 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:06.332 16:56:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:06.332 16:56:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:06.332 16:56:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:06.332 16:56:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:06.332 16:56:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:06.332 16:56:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:06.332 16:56:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:06.332 16:56:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:06.332 16:56:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.332 16:56:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.332 16:56:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:06.332 "name": "Existed_Raid", 00:18:06.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.332 "strip_size_kb": 64, 00:18:06.332 "state": "configuring", 00:18:06.332 "raid_level": "raid0", 00:18:06.332 "superblock": false, 00:18:06.332 "num_base_bdevs": 4, 00:18:06.332 "num_base_bdevs_discovered": 0, 00:18:06.332 "num_base_bdevs_operational": 4, 00:18:06.332 "base_bdevs_list": [ 00:18:06.332 { 00:18:06.332 "name": "BaseBdev1", 00:18:06.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.332 "is_configured": false, 00:18:06.332 "data_offset": 0, 00:18:06.332 "data_size": 0 00:18:06.332 }, 00:18:06.332 { 00:18:06.332 "name": "BaseBdev2", 00:18:06.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.332 "is_configured": false, 00:18:06.332 "data_offset": 0, 00:18:06.332 "data_size": 0 00:18:06.332 }, 00:18:06.332 { 00:18:06.332 "name": "BaseBdev3", 00:18:06.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.332 "is_configured": false, 00:18:06.332 "data_offset": 0, 00:18:06.332 "data_size": 0 00:18:06.332 }, 00:18:06.332 { 00:18:06.332 "name": "BaseBdev4", 00:18:06.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.332 "is_configured": false, 00:18:06.332 "data_offset": 0, 00:18:06.332 "data_size": 0 00:18:06.332 } 00:18:06.332 ] 00:18:06.332 }' 00:18:06.332 16:56:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:06.332 16:56:55 -- common/autotest_common.sh@10 -- # set +x 00:18:07.267 16:56:55 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:07.267 [2024-11-05 16:56:56.046479] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:07.267 [2024-11-05 16:56:56.046690] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:18:07.267 16:56:56 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:07.526 [2024-11-05 16:56:56.246544] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:07.526 [2024-11-05 16:56:56.246803] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:07.526 [2024-11-05 16:56:56.246952] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:07.526 [2024-11-05 16:56:56.247082] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:07.526 [2024-11-05 16:56:56.247180] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:07.526 [2024-11-05 16:56:56.247277] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:07.526 [2024-11-05 16:56:56.247405] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:07.526 [2024-11-05 16:56:56.247538] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:07.526 16:56:56 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:07.785 [2024-11-05 16:56:56.471774] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:07.785 BaseBdev1 00:18:07.785 16:56:56 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:07.785 16:56:56 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:07.785 16:56:56 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:07.785 16:56:56 -- common/autotest_common.sh@899 -- # local i 00:18:07.785 16:56:56 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:07.785 16:56:56 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:07.785 16:56:56 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:08.043 16:56:56 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:08.043 [ 00:18:08.043 { 00:18:08.043 "name": "BaseBdev1", 00:18:08.043 "aliases": [ 00:18:08.043 "e4429781-35f1-4bf1-87a2-37380c4a01e4" 00:18:08.043 ], 00:18:08.043 "product_name": "Malloc disk", 00:18:08.043 "block_size": 512, 00:18:08.043 "num_blocks": 65536, 00:18:08.043 "uuid": "e4429781-35f1-4bf1-87a2-37380c4a01e4", 00:18:08.043 "assigned_rate_limits": { 00:18:08.043 "rw_ios_per_sec": 0, 00:18:08.043 "rw_mbytes_per_sec": 0, 00:18:08.043 "r_mbytes_per_sec": 0, 00:18:08.043 "w_mbytes_per_sec": 0 00:18:08.043 }, 00:18:08.043 "claimed": true, 00:18:08.043 "claim_type": "exclusive_write", 00:18:08.043 "zoned": false, 00:18:08.043 "supported_io_types": { 00:18:08.043 "read": true, 00:18:08.043 "write": true, 00:18:08.043 "unmap": true, 00:18:08.044 "write_zeroes": true, 00:18:08.044 "flush": true, 00:18:08.044 "reset": true, 00:18:08.044 "compare": false, 00:18:08.044 "compare_and_write": false, 00:18:08.044 "abort": true, 00:18:08.044 "nvme_admin": false, 00:18:08.044 "nvme_io": false 00:18:08.044 }, 00:18:08.044 "memory_domains": [ 00:18:08.044 { 00:18:08.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.044 "dma_device_type": 2 00:18:08.044 } 00:18:08.044 ], 00:18:08.044 "driver_specific": {} 00:18:08.044 } 00:18:08.044 ] 00:18:08.044 16:56:56 -- common/autotest_common.sh@905 -- # return 0 00:18:08.044 16:56:56 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:08.044 16:56:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:08.044 16:56:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:08.044 16:56:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:08.044 16:56:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:08.044 16:56:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:08.044 16:56:56 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:08.044 16:56:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:08.044 16:56:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:08.044 16:56:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:08.044 16:56:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.044 16:56:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.302 16:56:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:08.302 "name": "Existed_Raid", 00:18:08.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.302 "strip_size_kb": 64, 00:18:08.302 "state": "configuring", 00:18:08.302 "raid_level": "raid0", 00:18:08.302 "superblock": false, 00:18:08.302 "num_base_bdevs": 4, 00:18:08.302 "num_base_bdevs_discovered": 1, 00:18:08.302 "num_base_bdevs_operational": 4, 00:18:08.302 "base_bdevs_list": [ 00:18:08.302 { 00:18:08.302 "name": "BaseBdev1", 00:18:08.302 "uuid": "e4429781-35f1-4bf1-87a2-37380c4a01e4", 00:18:08.302 "is_configured": true, 00:18:08.302 "data_offset": 0, 00:18:08.302 "data_size": 65536 00:18:08.302 }, 00:18:08.302 { 00:18:08.302 "name": "BaseBdev2", 00:18:08.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.302 "is_configured": false, 00:18:08.302 "data_offset": 0, 00:18:08.302 "data_size": 0 00:18:08.302 }, 00:18:08.302 { 00:18:08.302 "name": "BaseBdev3", 00:18:08.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.302 "is_configured": false, 00:18:08.302 "data_offset": 0, 00:18:08.302 "data_size": 0 00:18:08.302 }, 00:18:08.302 { 00:18:08.302 "name": "BaseBdev4", 00:18:08.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.302 "is_configured": false, 00:18:08.302 "data_offset": 0, 00:18:08.302 "data_size": 0 00:18:08.302 } 00:18:08.302 ] 00:18:08.302 }' 00:18:08.302 16:56:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:08.302 16:56:57 -- common/autotest_common.sh@10 -- # set +x 00:18:08.870 16:56:57 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:09.128 [2024-11-05 16:56:57.984207] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:09.129 [2024-11-05 16:56:57.984447] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:09.129 16:56:57 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:09.129 16:56:57 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:09.387 [2024-11-05 16:56:58.240313] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.387 [2024-11-05 16:56:58.242451] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:09.387 [2024-11-05 16:56:58.242670] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:09.387 [2024-11-05 16:56:58.242779] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:09.387 [2024-11-05 16:56:58.242844] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:09.387 [2024-11-05 16:56:58.243003] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:09.387 [2024-11-05 
16:56:58.243077] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:09.387 16:56:58 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:09.387 16:56:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:09.387 16:56:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:09.387 16:56:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:09.387 16:56:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:09.387 16:56:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:09.387 16:56:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:09.387 16:56:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:09.387 16:56:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:09.387 16:56:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:09.387 16:56:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:09.387 16:56:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:09.387 16:56:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.387 16:56:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.647 16:56:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:09.647 "name": "Existed_Raid", 00:18:09.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.647 "strip_size_kb": 64, 00:18:09.647 "state": "configuring", 00:18:09.647 "raid_level": "raid0", 00:18:09.647 "superblock": false, 00:18:09.647 "num_base_bdevs": 4, 00:18:09.647 "num_base_bdevs_discovered": 1, 00:18:09.647 "num_base_bdevs_operational": 4, 00:18:09.647 "base_bdevs_list": [ 00:18:09.647 { 00:18:09.647 "name": "BaseBdev1", 00:18:09.647 "uuid": "e4429781-35f1-4bf1-87a2-37380c4a01e4", 00:18:09.647 "is_configured": true, 00:18:09.647 "data_offset": 0, 00:18:09.647 "data_size": 65536 00:18:09.647 }, 00:18:09.647 { 00:18:09.647 "name": "BaseBdev2", 00:18:09.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.647 "is_configured": false, 00:18:09.647 "data_offset": 0, 00:18:09.647 "data_size": 0 00:18:09.647 }, 00:18:09.647 { 00:18:09.647 "name": "BaseBdev3", 00:18:09.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.647 "is_configured": false, 00:18:09.647 "data_offset": 0, 00:18:09.647 "data_size": 0 00:18:09.647 }, 00:18:09.647 { 00:18:09.647 "name": "BaseBdev4", 00:18:09.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.647 "is_configured": false, 00:18:09.647 "data_offset": 0, 00:18:09.647 "data_size": 0 00:18:09.647 } 00:18:09.647 ] 00:18:09.647 }' 00:18:09.647 16:56:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:09.647 16:56:58 -- common/autotest_common.sh@10 -- # set +x 00:18:10.214 16:56:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:10.473 [2024-11-05 16:56:59.322438] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.473 BaseBdev2 00:18:10.473 16:56:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:10.473 16:56:59 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:10.473 16:56:59 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:10.473 16:56:59 -- common/autotest_common.sh@899 -- # local i 00:18:10.473 16:56:59 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:10.473 16:56:59 -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:10.473 16:56:59 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:10.732 16:56:59 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:10.991 [ 00:18:10.991 { 00:18:10.991 "name": "BaseBdev2", 00:18:10.991 "aliases": [ 00:18:10.991 "b4840f14-4a75-4afd-87b9-3976e7f27003" 00:18:10.991 ], 00:18:10.991 "product_name": "Malloc disk", 00:18:10.991 "block_size": 512, 00:18:10.992 "num_blocks": 65536, 00:18:10.992 "uuid": "b4840f14-4a75-4afd-87b9-3976e7f27003", 00:18:10.992 "assigned_rate_limits": { 00:18:10.992 "rw_ios_per_sec": 0, 00:18:10.992 "rw_mbytes_per_sec": 0, 00:18:10.992 "r_mbytes_per_sec": 0, 00:18:10.992 "w_mbytes_per_sec": 0 00:18:10.992 }, 00:18:10.992 "claimed": true, 00:18:10.992 "claim_type": "exclusive_write", 00:18:10.992 "zoned": false, 00:18:10.992 "supported_io_types": { 00:18:10.992 "read": true, 00:18:10.992 "write": true, 00:18:10.992 "unmap": true, 00:18:10.992 "write_zeroes": true, 00:18:10.992 "flush": true, 00:18:10.992 "reset": true, 00:18:10.992 "compare": false, 00:18:10.992 "compare_and_write": false, 00:18:10.992 "abort": true, 00:18:10.992 "nvme_admin": false, 00:18:10.992 "nvme_io": false 00:18:10.992 }, 00:18:10.992 "memory_domains": [ 00:18:10.992 { 00:18:10.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.992 "dma_device_type": 2 00:18:10.992 } 00:18:10.992 ], 00:18:10.992 "driver_specific": {} 00:18:10.992 } 00:18:10.992 ] 00:18:10.992 16:56:59 -- common/autotest_common.sh@905 -- # return 0 00:18:10.992 16:56:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:10.992 16:56:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:10.992 16:56:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:10.992 16:56:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:10.992 16:56:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:10.992 16:56:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:10.992 16:56:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:10.992 16:56:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:10.992 16:56:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:10.992 16:56:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:10.992 16:56:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:10.992 16:56:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:10.992 16:56:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.992 16:56:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.250 16:57:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:11.250 "name": "Existed_Raid", 00:18:11.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.250 "strip_size_kb": 64, 00:18:11.250 "state": "configuring", 00:18:11.250 "raid_level": "raid0", 00:18:11.250 "superblock": false, 00:18:11.250 "num_base_bdevs": 4, 00:18:11.250 "num_base_bdevs_discovered": 2, 00:18:11.250 "num_base_bdevs_operational": 4, 00:18:11.250 "base_bdevs_list": [ 00:18:11.250 { 00:18:11.250 "name": "BaseBdev1", 00:18:11.250 "uuid": "e4429781-35f1-4bf1-87a2-37380c4a01e4", 00:18:11.250 "is_configured": true, 00:18:11.250 "data_offset": 0, 00:18:11.250 
"data_size": 65536 00:18:11.250 }, 00:18:11.250 { 00:18:11.250 "name": "BaseBdev2", 00:18:11.250 "uuid": "b4840f14-4a75-4afd-87b9-3976e7f27003", 00:18:11.250 "is_configured": true, 00:18:11.250 "data_offset": 0, 00:18:11.250 "data_size": 65536 00:18:11.250 }, 00:18:11.250 { 00:18:11.250 "name": "BaseBdev3", 00:18:11.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.250 "is_configured": false, 00:18:11.250 "data_offset": 0, 00:18:11.250 "data_size": 0 00:18:11.250 }, 00:18:11.250 { 00:18:11.250 "name": "BaseBdev4", 00:18:11.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.250 "is_configured": false, 00:18:11.250 "data_offset": 0, 00:18:11.250 "data_size": 0 00:18:11.250 } 00:18:11.250 ] 00:18:11.250 }' 00:18:11.250 16:57:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:11.250 16:57:00 -- common/autotest_common.sh@10 -- # set +x 00:18:11.815 16:57:00 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:12.074 [2024-11-05 16:57:00.960810] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:12.074 BaseBdev3 00:18:12.333 16:57:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:12.333 16:57:00 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:12.333 16:57:00 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:12.333 16:57:00 -- common/autotest_common.sh@899 -- # local i 00:18:12.333 16:57:00 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:12.333 16:57:00 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:12.333 16:57:00 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:12.333 16:57:01 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:12.592 [ 00:18:12.592 { 00:18:12.592 "name": "BaseBdev3", 00:18:12.592 "aliases": [ 00:18:12.592 "1c5a25ff-dfaf-415a-9d90-5b42eb1d31cc" 00:18:12.592 ], 00:18:12.592 "product_name": "Malloc disk", 00:18:12.592 "block_size": 512, 00:18:12.592 "num_blocks": 65536, 00:18:12.592 "uuid": "1c5a25ff-dfaf-415a-9d90-5b42eb1d31cc", 00:18:12.592 "assigned_rate_limits": { 00:18:12.592 "rw_ios_per_sec": 0, 00:18:12.592 "rw_mbytes_per_sec": 0, 00:18:12.592 "r_mbytes_per_sec": 0, 00:18:12.592 "w_mbytes_per_sec": 0 00:18:12.592 }, 00:18:12.592 "claimed": true, 00:18:12.592 "claim_type": "exclusive_write", 00:18:12.592 "zoned": false, 00:18:12.592 "supported_io_types": { 00:18:12.592 "read": true, 00:18:12.592 "write": true, 00:18:12.592 "unmap": true, 00:18:12.592 "write_zeroes": true, 00:18:12.592 "flush": true, 00:18:12.592 "reset": true, 00:18:12.592 "compare": false, 00:18:12.592 "compare_and_write": false, 00:18:12.592 "abort": true, 00:18:12.592 "nvme_admin": false, 00:18:12.592 "nvme_io": false 00:18:12.592 }, 00:18:12.592 "memory_domains": [ 00:18:12.592 { 00:18:12.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.592 "dma_device_type": 2 00:18:12.592 } 00:18:12.592 ], 00:18:12.592 "driver_specific": {} 00:18:12.592 } 00:18:12.592 ] 00:18:12.592 16:57:01 -- common/autotest_common.sh@905 -- # return 0 00:18:12.592 16:57:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:12.592 16:57:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:12.592 16:57:01 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:12.592 16:57:01 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:12.592 16:57:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:12.592 16:57:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:12.592 16:57:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:12.592 16:57:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:12.593 16:57:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:12.593 16:57:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:12.593 16:57:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:12.593 16:57:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:12.593 16:57:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.593 16:57:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.852 16:57:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:12.852 "name": "Existed_Raid", 00:18:12.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.852 "strip_size_kb": 64, 00:18:12.852 "state": "configuring", 00:18:12.852 "raid_level": "raid0", 00:18:12.852 "superblock": false, 00:18:12.852 "num_base_bdevs": 4, 00:18:12.852 "num_base_bdevs_discovered": 3, 00:18:12.852 "num_base_bdevs_operational": 4, 00:18:12.852 "base_bdevs_list": [ 00:18:12.852 { 00:18:12.852 "name": "BaseBdev1", 00:18:12.852 "uuid": "e4429781-35f1-4bf1-87a2-37380c4a01e4", 00:18:12.852 "is_configured": true, 00:18:12.852 "data_offset": 0, 00:18:12.852 "data_size": 65536 00:18:12.852 }, 00:18:12.852 { 00:18:12.852 "name": "BaseBdev2", 00:18:12.852 "uuid": "b4840f14-4a75-4afd-87b9-3976e7f27003", 00:18:12.852 "is_configured": true, 00:18:12.852 "data_offset": 0, 00:18:12.852 "data_size": 65536 00:18:12.852 }, 00:18:12.852 { 00:18:12.852 "name": "BaseBdev3", 00:18:12.852 "uuid": "1c5a25ff-dfaf-415a-9d90-5b42eb1d31cc", 00:18:12.852 "is_configured": true, 00:18:12.852 "data_offset": 0, 00:18:12.852 "data_size": 65536 00:18:12.852 }, 00:18:12.852 { 00:18:12.852 "name": "BaseBdev4", 00:18:12.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.852 "is_configured": false, 00:18:12.852 "data_offset": 0, 00:18:12.852 "data_size": 0 00:18:12.852 } 00:18:12.852 ] 00:18:12.852 }' 00:18:12.852 16:57:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:12.852 16:57:01 -- common/autotest_common.sh@10 -- # set +x 00:18:13.420 16:57:02 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:13.679 [2024-11-05 16:57:02.482812] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:13.679 [2024-11-05 16:57:02.483166] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:18:13.679 [2024-11-05 16:57:02.483284] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:13.679 [2024-11-05 16:57:02.483525] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:18:13.679 [2024-11-05 16:57:02.484104] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:18:13.679 [2024-11-05 16:57:02.484268] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:18:13.679 [2024-11-05 16:57:02.484711] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.679 BaseBdev4 00:18:13.679 
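
Each waitforbdev call in this test, including the waitforbdev BaseBdev4 traced next, expands to the same common/autotest_common.sh@897-@905 sequence: default the timeout to 2000 ms when no second argument is given, flush outstanding examine callbacks, then let the target itself block until the named bdev appears. A sketch reconstructed from those trace lines follows; the only assumption is the rpc_py shorthand carried over from the earlier sketch, every other step is visible in the log.

    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=$2
        local i                                      # declared at @899, unused here
        [[ -z $bdev_timeout ]] && bdev_timeout=2000  # ms, default set at @900
        # @902: returns once all registered examine callbacks have finished,
        # so the passthru/raid claims above settle before we look for the bdev.
        $rpc_py bdev_wait_for_examine
        # @904: -t makes the RPC block up to bdev_timeout ms for the bdev to
        # show up; the JSON descriptor it prints is what fills the log above.
        $rpc_py bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
    }
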
16:57:02 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:13.679 16:57:02 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:18:13.679 16:57:02 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:13.679 16:57:02 -- common/autotest_common.sh@899 -- # local i 00:18:13.679 16:57:02 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:13.679 16:57:02 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:13.679 16:57:02 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:13.939 16:57:02 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:14.198 [ 00:18:14.198 { 00:18:14.198 "name": "BaseBdev4", 00:18:14.198 "aliases": [ 00:18:14.198 "6386c05f-a450-44de-aa49-db8607aee461" 00:18:14.198 ], 00:18:14.198 "product_name": "Malloc disk", 00:18:14.198 "block_size": 512, 00:18:14.198 "num_blocks": 65536, 00:18:14.198 "uuid": "6386c05f-a450-44de-aa49-db8607aee461", 00:18:14.198 "assigned_rate_limits": { 00:18:14.198 "rw_ios_per_sec": 0, 00:18:14.198 "rw_mbytes_per_sec": 0, 00:18:14.198 "r_mbytes_per_sec": 0, 00:18:14.198 "w_mbytes_per_sec": 0 00:18:14.198 }, 00:18:14.198 "claimed": true, 00:18:14.198 "claim_type": "exclusive_write", 00:18:14.198 "zoned": false, 00:18:14.198 "supported_io_types": { 00:18:14.198 "read": true, 00:18:14.198 "write": true, 00:18:14.198 "unmap": true, 00:18:14.198 "write_zeroes": true, 00:18:14.198 "flush": true, 00:18:14.198 "reset": true, 00:18:14.198 "compare": false, 00:18:14.198 "compare_and_write": false, 00:18:14.198 "abort": true, 00:18:14.198 "nvme_admin": false, 00:18:14.198 "nvme_io": false 00:18:14.198 }, 00:18:14.198 "memory_domains": [ 00:18:14.198 { 00:18:14.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.198 "dma_device_type": 2 00:18:14.198 } 00:18:14.198 ], 00:18:14.198 "driver_specific": {} 00:18:14.198 } 00:18:14.198 ] 00:18:14.198 16:57:02 -- common/autotest_common.sh@905 -- # return 0 00:18:14.198 16:57:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:14.198 16:57:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:14.198 16:57:02 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:14.198 16:57:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:14.198 16:57:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:14.198 16:57:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:14.198 16:57:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:14.198 16:57:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:14.198 16:57:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:14.198 16:57:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:14.198 16:57:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:14.198 16:57:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:14.198 16:57:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.198 16:57:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.456 16:57:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:14.456 "name": "Existed_Raid", 00:18:14.456 "uuid": "b166b4a9-8e5c-49e3-9813-7da018823e2f", 00:18:14.456 "strip_size_kb": 64, 00:18:14.456 "state": "online", 00:18:14.456 "raid_level": "raid0", 00:18:14.456 
"superblock": false, 00:18:14.456 "num_base_bdevs": 4, 00:18:14.456 "num_base_bdevs_discovered": 4, 00:18:14.456 "num_base_bdevs_operational": 4, 00:18:14.456 "base_bdevs_list": [ 00:18:14.456 { 00:18:14.456 "name": "BaseBdev1", 00:18:14.456 "uuid": "e4429781-35f1-4bf1-87a2-37380c4a01e4", 00:18:14.456 "is_configured": true, 00:18:14.457 "data_offset": 0, 00:18:14.457 "data_size": 65536 00:18:14.457 }, 00:18:14.457 { 00:18:14.457 "name": "BaseBdev2", 00:18:14.457 "uuid": "b4840f14-4a75-4afd-87b9-3976e7f27003", 00:18:14.457 "is_configured": true, 00:18:14.457 "data_offset": 0, 00:18:14.457 "data_size": 65536 00:18:14.457 }, 00:18:14.457 { 00:18:14.457 "name": "BaseBdev3", 00:18:14.457 "uuid": "1c5a25ff-dfaf-415a-9d90-5b42eb1d31cc", 00:18:14.457 "is_configured": true, 00:18:14.457 "data_offset": 0, 00:18:14.457 "data_size": 65536 00:18:14.457 }, 00:18:14.457 { 00:18:14.457 "name": "BaseBdev4", 00:18:14.457 "uuid": "6386c05f-a450-44de-aa49-db8607aee461", 00:18:14.457 "is_configured": true, 00:18:14.457 "data_offset": 0, 00:18:14.457 "data_size": 65536 00:18:14.457 } 00:18:14.457 ] 00:18:14.457 }' 00:18:14.457 16:57:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:14.457 16:57:03 -- common/autotest_common.sh@10 -- # set +x 00:18:15.025 16:57:03 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:15.283 [2024-11-05 16:57:04.035516] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:15.283 [2024-11-05 16:57:04.035721] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:15.283 [2024-11-05 16:57:04.035891] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.283 16:57:04 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:15.283 16:57:04 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:15.283 16:57:04 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:15.283 16:57:04 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:15.283 16:57:04 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:15.283 16:57:04 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:15.283 16:57:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:15.283 16:57:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:15.283 16:57:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:15.283 16:57:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:15.283 16:57:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:15.283 16:57:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:15.283 16:57:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:15.283 16:57:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:15.283 16:57:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:15.283 16:57:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.283 16:57:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.541 16:57:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:15.541 "name": "Existed_Raid", 00:18:15.541 "uuid": "b166b4a9-8e5c-49e3-9813-7da018823e2f", 00:18:15.541 "strip_size_kb": 64, 00:18:15.541 "state": "offline", 00:18:15.541 "raid_level": "raid0", 00:18:15.541 "superblock": false, 00:18:15.541 "num_base_bdevs": 4, 00:18:15.541 "num_base_bdevs_discovered": 3, 00:18:15.541 
"num_base_bdevs_operational": 3, 00:18:15.541 "base_bdevs_list": [ 00:18:15.541 { 00:18:15.541 "name": null, 00:18:15.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.541 "is_configured": false, 00:18:15.541 "data_offset": 0, 00:18:15.541 "data_size": 65536 00:18:15.541 }, 00:18:15.541 { 00:18:15.541 "name": "BaseBdev2", 00:18:15.541 "uuid": "b4840f14-4a75-4afd-87b9-3976e7f27003", 00:18:15.541 "is_configured": true, 00:18:15.541 "data_offset": 0, 00:18:15.541 "data_size": 65536 00:18:15.541 }, 00:18:15.541 { 00:18:15.541 "name": "BaseBdev3", 00:18:15.541 "uuid": "1c5a25ff-dfaf-415a-9d90-5b42eb1d31cc", 00:18:15.541 "is_configured": true, 00:18:15.541 "data_offset": 0, 00:18:15.541 "data_size": 65536 00:18:15.541 }, 00:18:15.541 { 00:18:15.541 "name": "BaseBdev4", 00:18:15.541 "uuid": "6386c05f-a450-44de-aa49-db8607aee461", 00:18:15.541 "is_configured": true, 00:18:15.541 "data_offset": 0, 00:18:15.541 "data_size": 65536 00:18:15.541 } 00:18:15.541 ] 00:18:15.541 }' 00:18:15.541 16:57:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:15.541 16:57:04 -- common/autotest_common.sh@10 -- # set +x 00:18:16.478 16:57:05 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:16.478 16:57:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:16.478 16:57:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.478 16:57:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:16.478 16:57:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:16.478 16:57:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:16.478 16:57:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:16.737 [2024-11-05 16:57:05.430862] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:16.737 16:57:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:16.737 16:57:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:16.737 16:57:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.737 16:57:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:16.996 16:57:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:16.996 16:57:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:16.996 16:57:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:17.255 [2024-11-05 16:57:06.001223] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:17.255 16:57:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:17.255 16:57:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:17.255 16:57:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.255 16:57:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:17.514 16:57:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:17.514 16:57:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:17.514 16:57:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:17.773 [2024-11-05 16:57:06.558720] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:17.773 [2024-11-05 16:57:06.558978] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:18:17.773 16:57:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:17.773 16:57:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:17.773 16:57:06 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.773 16:57:06 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:18.032 16:57:06 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:18.032 16:57:06 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:18.032 16:57:06 -- bdev/bdev_raid.sh@287 -- # killprocess 118445 00:18:18.032 16:57:06 -- common/autotest_common.sh@936 -- # '[' -z 118445 ']' 00:18:18.032 16:57:06 -- common/autotest_common.sh@940 -- # kill -0 118445 00:18:18.032 16:57:06 -- common/autotest_common.sh@941 -- # uname 00:18:18.032 16:57:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:18.032 16:57:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 118445 00:18:18.032 killing process with pid 118445 00:18:18.032 16:57:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:18.032 16:57:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:18.032 16:57:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 118445' 00:18:18.032 16:57:06 -- common/autotest_common.sh@955 -- # kill 118445 00:18:18.032 16:57:06 -- common/autotest_common.sh@960 -- # wait 118445 00:18:18.032 [2024-11-05 16:57:06.854608] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:18.032 [2024-11-05 16:57:06.854796] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:18.969 ************************************ 00:18:18.969 END TEST raid_state_function_test 00:18:18.969 ************************************ 00:18:18.969 16:57:07 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:18.969 00:18:18.969 real 0m14.047s 00:18:18.969 user 0m25.076s 00:18:18.969 sys 0m1.674s 00:18:18.969 16:57:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:18.969 16:57:07 -- common/autotest_common.sh@10 -- # set +x 00:18:18.969 16:57:07 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:18:18.969 16:57:07 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:18.969 16:57:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:18.969 16:57:07 -- common/autotest_common.sh@10 -- # set +x 00:18:19.227 ************************************ 00:18:19.227 START TEST raid_state_function_test_sb 00:18:19.227 ************************************ 00:18:19.227 16:57:07 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 4 true 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:19.228 
16:57:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@226 -- # raid_pid=118885 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 118885' 00:18:19.228 Process raid pid: 118885 00:18:19.228 16:57:07 -- bdev/bdev_raid.sh@228 -- # waitforlisten 118885 /var/tmp/spdk-raid.sock 00:18:19.228 16:57:07 -- common/autotest_common.sh@829 -- # '[' -z 118885 ']' 00:18:19.228 16:57:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:19.228 16:57:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.228 16:57:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:19.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:19.228 16:57:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.228 16:57:07 -- common/autotest_common.sh@10 -- # set +x 00:18:19.228 [2024-11-05 16:57:07.946686] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
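
The second test in this file (raid_state_function_test_sb, raid pid 118885) brings up its own bdev_svc target exactly as the first one did for pid 118445. Reconstructed from the @225-@228 trace just above, the bring-up looks roughly like the sketch below; the backgrounding with & and $!, the polling loop body, and the rpc_get_methods probe are assumptions, since the trace hides the wait behind xtrace_disable (@838) and only shows rpc_addr and max_retries=100 being set.

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    echo "Process raid pid: $raid_pid"

    waitforlisten() {
        local pid=$1
        local rpc_addr=$2       # /var/tmp/spdk-raid.sock in this run (@833)
        local max_retries=100   # value shown at @834
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2> /dev/null || return 1   # daemon died early
            # Any cheap RPC serves as a liveness probe once the socket is up.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
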
00:18:19.228 [2024-11-05 16:57:07.947105] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.228 [2024-11-05 16:57:08.114824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.487 [2024-11-05 16:57:08.284645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.746 [2024-11-05 16:57:08.460907] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.006 16:57:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.006 16:57:08 -- common/autotest_common.sh@862 -- # return 0 00:18:20.006 16:57:08 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:20.267 [2024-11-05 16:57:09.070279] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:20.267 [2024-11-05 16:57:09.070520] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:20.267 [2024-11-05 16:57:09.070628] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:20.267 [2024-11-05 16:57:09.070692] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:20.267 [2024-11-05 16:57:09.070782] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:20.267 [2024-11-05 16:57:09.070856] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:20.267 [2024-11-05 16:57:09.071011] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:20.267 [2024-11-05 16:57:09.071095] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:20.267 16:57:09 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:20.267 16:57:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:20.267 16:57:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:20.267 16:57:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:20.267 16:57:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:20.267 16:57:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:20.267 16:57:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:20.267 16:57:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:20.267 16:57:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:20.267 16:57:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:20.267 16:57:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.267 16:57:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.526 16:57:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:20.526 "name": "Existed_Raid", 00:18:20.526 "uuid": "e0d310b5-0c5d-4a46-ac3b-f48f08e681e1", 00:18:20.526 "strip_size_kb": 64, 00:18:20.526 "state": "configuring", 00:18:20.526 "raid_level": "raid0", 00:18:20.526 "superblock": true, 00:18:20.526 "num_base_bdevs": 4, 00:18:20.526 "num_base_bdevs_discovered": 0, 00:18:20.526 "num_base_bdevs_operational": 4, 00:18:20.526 "base_bdevs_list": [ 00:18:20.526 { 00:18:20.526 
"name": "BaseBdev1", 00:18:20.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.526 "is_configured": false, 00:18:20.526 "data_offset": 0, 00:18:20.526 "data_size": 0 00:18:20.526 }, 00:18:20.526 { 00:18:20.526 "name": "BaseBdev2", 00:18:20.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.526 "is_configured": false, 00:18:20.526 "data_offset": 0, 00:18:20.526 "data_size": 0 00:18:20.526 }, 00:18:20.526 { 00:18:20.526 "name": "BaseBdev3", 00:18:20.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.526 "is_configured": false, 00:18:20.526 "data_offset": 0, 00:18:20.526 "data_size": 0 00:18:20.526 }, 00:18:20.526 { 00:18:20.526 "name": "BaseBdev4", 00:18:20.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.526 "is_configured": false, 00:18:20.526 "data_offset": 0, 00:18:20.526 "data_size": 0 00:18:20.526 } 00:18:20.526 ] 00:18:20.526 }' 00:18:20.526 16:57:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:20.526 16:57:09 -- common/autotest_common.sh@10 -- # set +x 00:18:21.093 16:57:09 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:21.352 [2024-11-05 16:57:10.174509] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:21.352 [2024-11-05 16:57:10.174791] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:18:21.352 16:57:10 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:21.610 [2024-11-05 16:57:10.434581] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:21.610 [2024-11-05 16:57:10.434831] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:21.610 [2024-11-05 16:57:10.434986] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:21.610 [2024-11-05 16:57:10.435056] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:21.610 [2024-11-05 16:57:10.435156] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:21.610 [2024-11-05 16:57:10.435234] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:21.610 [2024-11-05 16:57:10.435438] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:21.610 [2024-11-05 16:57:10.435501] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:21.610 16:57:10 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:21.868 [2024-11-05 16:57:10.666591] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:21.868 BaseBdev1 00:18:21.868 16:57:10 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:21.868 16:57:10 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:21.868 16:57:10 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:21.868 16:57:10 -- common/autotest_common.sh@899 -- # local i 00:18:21.868 16:57:10 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:21.868 16:57:10 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:21.868 16:57:10 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:22.127 16:57:10 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:22.386 [ 00:18:22.386 { 00:18:22.386 "name": "BaseBdev1", 00:18:22.386 "aliases": [ 00:18:22.386 "38a16d8e-0c46-4e2c-9968-63472967a1e9" 00:18:22.386 ], 00:18:22.386 "product_name": "Malloc disk", 00:18:22.386 "block_size": 512, 00:18:22.386 "num_blocks": 65536, 00:18:22.386 "uuid": "38a16d8e-0c46-4e2c-9968-63472967a1e9", 00:18:22.386 "assigned_rate_limits": { 00:18:22.386 "rw_ios_per_sec": 0, 00:18:22.386 "rw_mbytes_per_sec": 0, 00:18:22.386 "r_mbytes_per_sec": 0, 00:18:22.386 "w_mbytes_per_sec": 0 00:18:22.386 }, 00:18:22.386 "claimed": true, 00:18:22.386 "claim_type": "exclusive_write", 00:18:22.386 "zoned": false, 00:18:22.386 "supported_io_types": { 00:18:22.386 "read": true, 00:18:22.386 "write": true, 00:18:22.386 "unmap": true, 00:18:22.386 "write_zeroes": true, 00:18:22.386 "flush": true, 00:18:22.386 "reset": true, 00:18:22.386 "compare": false, 00:18:22.386 "compare_and_write": false, 00:18:22.386 "abort": true, 00:18:22.386 "nvme_admin": false, 00:18:22.386 "nvme_io": false 00:18:22.386 }, 00:18:22.386 "memory_domains": [ 00:18:22.386 { 00:18:22.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.386 "dma_device_type": 2 00:18:22.386 } 00:18:22.386 ], 00:18:22.386 "driver_specific": {} 00:18:22.386 } 00:18:22.386 ] 00:18:22.386 16:57:11 -- common/autotest_common.sh@905 -- # return 0 00:18:22.386 16:57:11 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:22.386 16:57:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:22.386 16:57:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:22.386 16:57:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:22.386 16:57:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:22.386 16:57:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:22.386 16:57:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:22.386 16:57:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:22.386 16:57:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:22.386 16:57:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:22.386 16:57:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.386 16:57:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.645 16:57:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:22.645 "name": "Existed_Raid", 00:18:22.645 "uuid": "1d3a179b-fc5a-46b5-aad2-a4af41c0cf73", 00:18:22.645 "strip_size_kb": 64, 00:18:22.645 "state": "configuring", 00:18:22.645 "raid_level": "raid0", 00:18:22.645 "superblock": true, 00:18:22.645 "num_base_bdevs": 4, 00:18:22.645 "num_base_bdevs_discovered": 1, 00:18:22.645 "num_base_bdevs_operational": 4, 00:18:22.645 "base_bdevs_list": [ 00:18:22.645 { 00:18:22.645 "name": "BaseBdev1", 00:18:22.645 "uuid": "38a16d8e-0c46-4e2c-9968-63472967a1e9", 00:18:22.645 "is_configured": true, 00:18:22.645 "data_offset": 2048, 00:18:22.645 "data_size": 63488 00:18:22.645 }, 00:18:22.645 { 00:18:22.645 "name": "BaseBdev2", 00:18:22.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.645 "is_configured": false, 00:18:22.645 "data_offset": 0, 00:18:22.645 "data_size": 0 00:18:22.645 }, 
00:18:22.645 { 00:18:22.645 "name": "BaseBdev3", 00:18:22.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.645 "is_configured": false, 00:18:22.645 "data_offset": 0, 00:18:22.645 "data_size": 0 00:18:22.645 }, 00:18:22.645 { 00:18:22.645 "name": "BaseBdev4", 00:18:22.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.645 "is_configured": false, 00:18:22.645 "data_offset": 0, 00:18:22.645 "data_size": 0 00:18:22.645 } 00:18:22.645 ] 00:18:22.645 }' 00:18:22.645 16:57:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:22.645 16:57:11 -- common/autotest_common.sh@10 -- # set +x 00:18:23.212 16:57:11 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:23.471 [2024-11-05 16:57:12.195027] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:23.471 [2024-11-05 16:57:12.195094] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:23.471 16:57:12 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:23.471 16:57:12 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:23.730 16:57:12 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:23.989 BaseBdev1 00:18:23.989 16:57:12 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:23.989 16:57:12 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:23.989 16:57:12 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:23.989 16:57:12 -- common/autotest_common.sh@899 -- # local i 00:18:23.989 16:57:12 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:23.989 16:57:12 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:23.989 16:57:12 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:24.248 16:57:13 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:24.507 [ 00:18:24.507 { 00:18:24.507 "name": "BaseBdev1", 00:18:24.507 "aliases": [ 00:18:24.507 "6c230df8-d4eb-46f3-9d8e-97951a6991f1" 00:18:24.507 ], 00:18:24.507 "product_name": "Malloc disk", 00:18:24.507 "block_size": 512, 00:18:24.507 "num_blocks": 65536, 00:18:24.507 "uuid": "6c230df8-d4eb-46f3-9d8e-97951a6991f1", 00:18:24.507 "assigned_rate_limits": { 00:18:24.507 "rw_ios_per_sec": 0, 00:18:24.507 "rw_mbytes_per_sec": 0, 00:18:24.507 "r_mbytes_per_sec": 0, 00:18:24.507 "w_mbytes_per_sec": 0 00:18:24.507 }, 00:18:24.507 "claimed": false, 00:18:24.507 "zoned": false, 00:18:24.507 "supported_io_types": { 00:18:24.507 "read": true, 00:18:24.507 "write": true, 00:18:24.507 "unmap": true, 00:18:24.507 "write_zeroes": true, 00:18:24.507 "flush": true, 00:18:24.507 "reset": true, 00:18:24.507 "compare": false, 00:18:24.507 "compare_and_write": false, 00:18:24.507 "abort": true, 00:18:24.507 "nvme_admin": false, 00:18:24.507 "nvme_io": false 00:18:24.507 }, 00:18:24.507 "memory_domains": [ 00:18:24.507 { 00:18:24.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.507 "dma_device_type": 2 00:18:24.507 } 00:18:24.507 ], 00:18:24.507 "driver_specific": {} 00:18:24.507 } 00:18:24.507 ] 00:18:24.507 16:57:13 -- common/autotest_common.sh@905 -- # return 0 00:18:24.507 16:57:13 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:24.766 [2024-11-05 16:57:13.445233] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:24.766 [2024-11-05 16:57:13.447191] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:24.766 [2024-11-05 16:57:13.447278] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:24.766 [2024-11-05 16:57:13.447305] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:24.766 [2024-11-05 16:57:13.447347] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:24.766 [2024-11-05 16:57:13.447354] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:24.766 [2024-11-05 16:57:13.447380] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:24.766 16:57:13 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:24.766 16:57:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:24.766 16:57:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:24.766 16:57:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:24.766 16:57:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:24.766 16:57:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:24.766 16:57:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:24.766 16:57:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:24.766 16:57:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:24.766 16:57:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:24.766 16:57:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:24.766 16:57:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:24.767 16:57:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.767 16:57:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.025 16:57:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:25.025 "name": "Existed_Raid", 00:18:25.025 "uuid": "1747c796-feb2-43f9-975f-5840edeef94d", 00:18:25.025 "strip_size_kb": 64, 00:18:25.025 "state": "configuring", 00:18:25.025 "raid_level": "raid0", 00:18:25.025 "superblock": true, 00:18:25.025 "num_base_bdevs": 4, 00:18:25.025 "num_base_bdevs_discovered": 1, 00:18:25.025 "num_base_bdevs_operational": 4, 00:18:25.025 "base_bdevs_list": [ 00:18:25.025 { 00:18:25.025 "name": "BaseBdev1", 00:18:25.025 "uuid": "6c230df8-d4eb-46f3-9d8e-97951a6991f1", 00:18:25.025 "is_configured": true, 00:18:25.026 "data_offset": 2048, 00:18:25.026 "data_size": 63488 00:18:25.026 }, 00:18:25.026 { 00:18:25.026 "name": "BaseBdev2", 00:18:25.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.026 "is_configured": false, 00:18:25.026 "data_offset": 0, 00:18:25.026 "data_size": 0 00:18:25.026 }, 00:18:25.026 { 00:18:25.026 "name": "BaseBdev3", 00:18:25.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.026 "is_configured": false, 00:18:25.026 "data_offset": 0, 00:18:25.026 "data_size": 0 00:18:25.026 }, 00:18:25.026 { 00:18:25.026 "name": "BaseBdev4", 00:18:25.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.026 "is_configured": 
false, 00:18:25.026 "data_offset": 0, 00:18:25.026 "data_size": 0 00:18:25.026 } 00:18:25.026 ] 00:18:25.026 }' 00:18:25.026 16:57:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:25.026 16:57:13 -- common/autotest_common.sh@10 -- # set +x 00:18:25.592 16:57:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:25.851 [2024-11-05 16:57:14.580727] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:25.851 BaseBdev2 00:18:25.851 16:57:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:25.851 16:57:14 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:25.851 16:57:14 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:25.851 16:57:14 -- common/autotest_common.sh@899 -- # local i 00:18:25.851 16:57:14 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:25.851 16:57:14 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:25.851 16:57:14 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:26.139 16:57:14 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:26.398 [ 00:18:26.398 { 00:18:26.398 "name": "BaseBdev2", 00:18:26.398 "aliases": [ 00:18:26.398 "05c8cbc7-30a0-4554-b1d5-1372d87fe222" 00:18:26.398 ], 00:18:26.398 "product_name": "Malloc disk", 00:18:26.398 "block_size": 512, 00:18:26.398 "num_blocks": 65536, 00:18:26.398 "uuid": "05c8cbc7-30a0-4554-b1d5-1372d87fe222", 00:18:26.398 "assigned_rate_limits": { 00:18:26.398 "rw_ios_per_sec": 0, 00:18:26.398 "rw_mbytes_per_sec": 0, 00:18:26.398 "r_mbytes_per_sec": 0, 00:18:26.398 "w_mbytes_per_sec": 0 00:18:26.398 }, 00:18:26.398 "claimed": true, 00:18:26.398 "claim_type": "exclusive_write", 00:18:26.398 "zoned": false, 00:18:26.398 "supported_io_types": { 00:18:26.398 "read": true, 00:18:26.398 "write": true, 00:18:26.398 "unmap": true, 00:18:26.398 "write_zeroes": true, 00:18:26.398 "flush": true, 00:18:26.398 "reset": true, 00:18:26.398 "compare": false, 00:18:26.398 "compare_and_write": false, 00:18:26.398 "abort": true, 00:18:26.398 "nvme_admin": false, 00:18:26.398 "nvme_io": false 00:18:26.398 }, 00:18:26.398 "memory_domains": [ 00:18:26.398 { 00:18:26.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.398 "dma_device_type": 2 00:18:26.398 } 00:18:26.398 ], 00:18:26.398 "driver_specific": {} 00:18:26.398 } 00:18:26.398 ] 00:18:26.398 16:57:15 -- common/autotest_common.sh@905 -- # return 0 00:18:26.398 16:57:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:26.398 16:57:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:26.398 16:57:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:26.398 16:57:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:26.398 16:57:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:26.398 16:57:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:26.398 16:57:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:26.398 16:57:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:26.398 16:57:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:26.398 16:57:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:26.398 16:57:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:26.398 
16:57:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:26.398 16:57:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.398 16:57:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.398 16:57:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:26.398 "name": "Existed_Raid", 00:18:26.398 "uuid": "1747c796-feb2-43f9-975f-5840edeef94d", 00:18:26.398 "strip_size_kb": 64, 00:18:26.398 "state": "configuring", 00:18:26.398 "raid_level": "raid0", 00:18:26.398 "superblock": true, 00:18:26.398 "num_base_bdevs": 4, 00:18:26.398 "num_base_bdevs_discovered": 2, 00:18:26.398 "num_base_bdevs_operational": 4, 00:18:26.398 "base_bdevs_list": [ 00:18:26.398 { 00:18:26.398 "name": "BaseBdev1", 00:18:26.398 "uuid": "6c230df8-d4eb-46f3-9d8e-97951a6991f1", 00:18:26.398 "is_configured": true, 00:18:26.398 "data_offset": 2048, 00:18:26.398 "data_size": 63488 00:18:26.398 }, 00:18:26.398 { 00:18:26.398 "name": "BaseBdev2", 00:18:26.398 "uuid": "05c8cbc7-30a0-4554-b1d5-1372d87fe222", 00:18:26.398 "is_configured": true, 00:18:26.398 "data_offset": 2048, 00:18:26.398 "data_size": 63488 00:18:26.398 }, 00:18:26.398 { 00:18:26.398 "name": "BaseBdev3", 00:18:26.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.398 "is_configured": false, 00:18:26.398 "data_offset": 0, 00:18:26.398 "data_size": 0 00:18:26.398 }, 00:18:26.398 { 00:18:26.398 "name": "BaseBdev4", 00:18:26.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.398 "is_configured": false, 00:18:26.398 "data_offset": 0, 00:18:26.398 "data_size": 0 00:18:26.398 } 00:18:26.398 ] 00:18:26.398 }' 00:18:26.398 16:57:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:26.398 16:57:15 -- common/autotest_common.sh@10 -- # set +x 00:18:27.336 16:57:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:27.336 [2024-11-05 16:57:16.126449] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:27.336 BaseBdev3 00:18:27.336 16:57:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:27.336 16:57:16 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:27.336 16:57:16 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:27.336 16:57:16 -- common/autotest_common.sh@899 -- # local i 00:18:27.336 16:57:16 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:27.336 16:57:16 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:27.336 16:57:16 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:27.595 16:57:16 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:27.854 [ 00:18:27.854 { 00:18:27.854 "name": "BaseBdev3", 00:18:27.854 "aliases": [ 00:18:27.854 "f64c9429-2bdf-496e-b7dd-833d494982c1" 00:18:27.854 ], 00:18:27.854 "product_name": "Malloc disk", 00:18:27.854 "block_size": 512, 00:18:27.854 "num_blocks": 65536, 00:18:27.854 "uuid": "f64c9429-2bdf-496e-b7dd-833d494982c1", 00:18:27.854 "assigned_rate_limits": { 00:18:27.854 "rw_ios_per_sec": 0, 00:18:27.854 "rw_mbytes_per_sec": 0, 00:18:27.854 "r_mbytes_per_sec": 0, 00:18:27.854 "w_mbytes_per_sec": 0 00:18:27.854 }, 00:18:27.854 "claimed": true, 00:18:27.854 "claim_type": "exclusive_write", 00:18:27.854 "zoned": false, 
00:18:27.854 "supported_io_types": { 00:18:27.854 "read": true, 00:18:27.854 "write": true, 00:18:27.854 "unmap": true, 00:18:27.854 "write_zeroes": true, 00:18:27.854 "flush": true, 00:18:27.854 "reset": true, 00:18:27.854 "compare": false, 00:18:27.854 "compare_and_write": false, 00:18:27.854 "abort": true, 00:18:27.854 "nvme_admin": false, 00:18:27.854 "nvme_io": false 00:18:27.854 }, 00:18:27.854 "memory_domains": [ 00:18:27.854 { 00:18:27.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.854 "dma_device_type": 2 00:18:27.854 } 00:18:27.854 ], 00:18:27.854 "driver_specific": {} 00:18:27.854 } 00:18:27.854 ] 00:18:27.854 16:57:16 -- common/autotest_common.sh@905 -- # return 0 00:18:27.854 16:57:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:27.854 16:57:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:27.854 16:57:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:27.854 16:57:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:27.854 16:57:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:27.854 16:57:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:27.854 16:57:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:27.854 16:57:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:27.854 16:57:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:27.854 16:57:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:27.854 16:57:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:27.854 16:57:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:27.854 16:57:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.854 16:57:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.113 16:57:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:28.113 "name": "Existed_Raid", 00:18:28.113 "uuid": "1747c796-feb2-43f9-975f-5840edeef94d", 00:18:28.113 "strip_size_kb": 64, 00:18:28.113 "state": "configuring", 00:18:28.113 "raid_level": "raid0", 00:18:28.113 "superblock": true, 00:18:28.113 "num_base_bdevs": 4, 00:18:28.113 "num_base_bdevs_discovered": 3, 00:18:28.113 "num_base_bdevs_operational": 4, 00:18:28.113 "base_bdevs_list": [ 00:18:28.113 { 00:18:28.113 "name": "BaseBdev1", 00:18:28.113 "uuid": "6c230df8-d4eb-46f3-9d8e-97951a6991f1", 00:18:28.113 "is_configured": true, 00:18:28.113 "data_offset": 2048, 00:18:28.113 "data_size": 63488 00:18:28.113 }, 00:18:28.113 { 00:18:28.113 "name": "BaseBdev2", 00:18:28.113 "uuid": "05c8cbc7-30a0-4554-b1d5-1372d87fe222", 00:18:28.113 "is_configured": true, 00:18:28.113 "data_offset": 2048, 00:18:28.113 "data_size": 63488 00:18:28.113 }, 00:18:28.113 { 00:18:28.113 "name": "BaseBdev3", 00:18:28.113 "uuid": "f64c9429-2bdf-496e-b7dd-833d494982c1", 00:18:28.113 "is_configured": true, 00:18:28.113 "data_offset": 2048, 00:18:28.113 "data_size": 63488 00:18:28.113 }, 00:18:28.113 { 00:18:28.113 "name": "BaseBdev4", 00:18:28.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.113 "is_configured": false, 00:18:28.113 "data_offset": 0, 00:18:28.113 "data_size": 0 00:18:28.113 } 00:18:28.113 ] 00:18:28.113 }' 00:18:28.113 16:57:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:28.113 16:57:16 -- common/autotest_common.sh@10 -- # set +x 00:18:28.681 16:57:17 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4 00:18:28.940 [2024-11-05 16:57:17.652133] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:28.940 [2024-11-05 16:57:17.652416] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:18:28.940 [2024-11-05 16:57:17.652430] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:28.940 [2024-11-05 16:57:17.652533] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:18:28.940 [2024-11-05 16:57:17.652939] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:18:28.940 [2024-11-05 16:57:17.652964] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:18:28.940 [2024-11-05 16:57:17.653153] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.940 BaseBdev4 00:18:28.940 16:57:17 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:28.940 16:57:17 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:18:28.940 16:57:17 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:28.940 16:57:17 -- common/autotest_common.sh@899 -- # local i 00:18:28.940 16:57:17 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:28.940 16:57:17 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:28.940 16:57:17 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:29.198 16:57:17 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:29.457 [ 00:18:29.457 { 00:18:29.457 "name": "BaseBdev4", 00:18:29.457 "aliases": [ 00:18:29.457 "d31c68d8-a6df-41c1-ba3c-8fd431be353e" 00:18:29.457 ], 00:18:29.457 "product_name": "Malloc disk", 00:18:29.457 "block_size": 512, 00:18:29.457 "num_blocks": 65536, 00:18:29.457 "uuid": "d31c68d8-a6df-41c1-ba3c-8fd431be353e", 00:18:29.457 "assigned_rate_limits": { 00:18:29.457 "rw_ios_per_sec": 0, 00:18:29.457 "rw_mbytes_per_sec": 0, 00:18:29.457 "r_mbytes_per_sec": 0, 00:18:29.457 "w_mbytes_per_sec": 0 00:18:29.457 }, 00:18:29.457 "claimed": true, 00:18:29.457 "claim_type": "exclusive_write", 00:18:29.457 "zoned": false, 00:18:29.457 "supported_io_types": { 00:18:29.457 "read": true, 00:18:29.457 "write": true, 00:18:29.457 "unmap": true, 00:18:29.457 "write_zeroes": true, 00:18:29.457 "flush": true, 00:18:29.457 "reset": true, 00:18:29.457 "compare": false, 00:18:29.457 "compare_and_write": false, 00:18:29.457 "abort": true, 00:18:29.457 "nvme_admin": false, 00:18:29.457 "nvme_io": false 00:18:29.457 }, 00:18:29.457 "memory_domains": [ 00:18:29.457 { 00:18:29.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.457 "dma_device_type": 2 00:18:29.457 } 00:18:29.457 ], 00:18:29.457 "driver_specific": {} 00:18:29.457 } 00:18:29.457 ] 00:18:29.457 16:57:18 -- common/autotest_common.sh@905 -- # return 0 00:18:29.457 16:57:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:29.457 16:57:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:29.457 16:57:18 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:29.457 16:57:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:29.457 16:57:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:29.457 16:57:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
00:18:29.457 16:57:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:29.457 16:57:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:29.457 16:57:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:29.457 16:57:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:29.457 16:57:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:29.457 16:57:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:29.457 16:57:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.457 16:57:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.717 16:57:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:29.717 "name": "Existed_Raid", 00:18:29.717 "uuid": "1747c796-feb2-43f9-975f-5840edeef94d", 00:18:29.717 "strip_size_kb": 64, 00:18:29.717 "state": "online", 00:18:29.717 "raid_level": "raid0", 00:18:29.717 "superblock": true, 00:18:29.717 "num_base_bdevs": 4, 00:18:29.717 "num_base_bdevs_discovered": 4, 00:18:29.717 "num_base_bdevs_operational": 4, 00:18:29.717 "base_bdevs_list": [ 00:18:29.717 { 00:18:29.717 "name": "BaseBdev1", 00:18:29.717 "uuid": "6c230df8-d4eb-46f3-9d8e-97951a6991f1", 00:18:29.717 "is_configured": true, 00:18:29.717 "data_offset": 2048, 00:18:29.717 "data_size": 63488 00:18:29.717 }, 00:18:29.717 { 00:18:29.717 "name": "BaseBdev2", 00:18:29.717 "uuid": "05c8cbc7-30a0-4554-b1d5-1372d87fe222", 00:18:29.717 "is_configured": true, 00:18:29.717 "data_offset": 2048, 00:18:29.717 "data_size": 63488 00:18:29.717 }, 00:18:29.717 { 00:18:29.717 "name": "BaseBdev3", 00:18:29.717 "uuid": "f64c9429-2bdf-496e-b7dd-833d494982c1", 00:18:29.717 "is_configured": true, 00:18:29.717 "data_offset": 2048, 00:18:29.717 "data_size": 63488 00:18:29.717 }, 00:18:29.717 { 00:18:29.717 "name": "BaseBdev4", 00:18:29.717 "uuid": "d31c68d8-a6df-41c1-ba3c-8fd431be353e", 00:18:29.717 "is_configured": true, 00:18:29.717 "data_offset": 2048, 00:18:29.717 "data_size": 63488 00:18:29.717 } 00:18:29.717 ] 00:18:29.717 }' 00:18:29.717 16:57:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:29.717 16:57:18 -- common/autotest_common.sh@10 -- # set +x 00:18:30.286 16:57:18 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:30.545 [2024-11-05 16:57:19.197680] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:30.545 [2024-11-05 16:57:19.197731] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:30.545 [2024-11-05 16:57:19.197828] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.545 16:57:19 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:30.545 16:57:19 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:30.545 16:57:19 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:30.545 16:57:19 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:30.545 16:57:19 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:30.545 16:57:19 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:30.545 16:57:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:30.545 16:57:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:30.545 16:57:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:30.545 16:57:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:30.545 16:57:19 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=3 00:18:30.545 16:57:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:30.545 16:57:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:30.545 16:57:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:30.545 16:57:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:30.545 16:57:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.545 16:57:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.804 16:57:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:30.804 "name": "Existed_Raid", 00:18:30.804 "uuid": "1747c796-feb2-43f9-975f-5840edeef94d", 00:18:30.804 "strip_size_kb": 64, 00:18:30.804 "state": "offline", 00:18:30.804 "raid_level": "raid0", 00:18:30.804 "superblock": true, 00:18:30.804 "num_base_bdevs": 4, 00:18:30.804 "num_base_bdevs_discovered": 3, 00:18:30.804 "num_base_bdevs_operational": 3, 00:18:30.804 "base_bdevs_list": [ 00:18:30.804 { 00:18:30.804 "name": null, 00:18:30.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.804 "is_configured": false, 00:18:30.804 "data_offset": 2048, 00:18:30.804 "data_size": 63488 00:18:30.804 }, 00:18:30.804 { 00:18:30.804 "name": "BaseBdev2", 00:18:30.804 "uuid": "05c8cbc7-30a0-4554-b1d5-1372d87fe222", 00:18:30.804 "is_configured": true, 00:18:30.804 "data_offset": 2048, 00:18:30.804 "data_size": 63488 00:18:30.804 }, 00:18:30.804 { 00:18:30.804 "name": "BaseBdev3", 00:18:30.804 "uuid": "f64c9429-2bdf-496e-b7dd-833d494982c1", 00:18:30.804 "is_configured": true, 00:18:30.804 "data_offset": 2048, 00:18:30.804 "data_size": 63488 00:18:30.804 }, 00:18:30.804 { 00:18:30.804 "name": "BaseBdev4", 00:18:30.804 "uuid": "d31c68d8-a6df-41c1-ba3c-8fd431be353e", 00:18:30.804 "is_configured": true, 00:18:30.804 "data_offset": 2048, 00:18:30.804 "data_size": 63488 00:18:30.804 } 00:18:30.804 ] 00:18:30.804 }' 00:18:30.804 16:57:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:30.804 16:57:19 -- common/autotest_common.sh@10 -- # set +x 00:18:31.372 16:57:20 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:31.372 16:57:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:31.372 16:57:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.372 16:57:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:31.631 16:57:20 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:31.631 16:57:20 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:31.631 16:57:20 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:31.890 [2024-11-05 16:57:20.675781] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:31.890 16:57:20 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:31.890 16:57:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:31.890 16:57:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.890 16:57:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:32.148 16:57:20 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:32.149 16:57:20 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:32.149 16:57:20 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:18:32.419 [2024-11-05 16:57:21.127735] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:32.419 16:57:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:32.419 16:57:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:32.419 16:57:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.419 16:57:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:32.700 16:57:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:32.700 16:57:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:32.700 16:57:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:32.959 [2024-11-05 16:57:21.639691] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:32.959 [2024-11-05 16:57:21.639761] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:18:32.959 16:57:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:32.959 16:57:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:32.959 16:57:21 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.959 16:57:21 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:33.218 16:57:21 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:33.218 16:57:21 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:33.218 16:57:21 -- bdev/bdev_raid.sh@287 -- # killprocess 118885 00:18:33.218 16:57:21 -- common/autotest_common.sh@936 -- # '[' -z 118885 ']' 00:18:33.218 16:57:21 -- common/autotest_common.sh@940 -- # kill -0 118885 00:18:33.218 16:57:21 -- common/autotest_common.sh@941 -- # uname 00:18:33.218 16:57:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:33.218 16:57:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 118885 00:18:33.218 16:57:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:33.218 16:57:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:33.218 16:57:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 118885' 00:18:33.218 killing process with pid 118885 00:18:33.218 16:57:21 -- common/autotest_common.sh@955 -- # kill 118885 00:18:33.218 [2024-11-05 16:57:21.934513] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:33.218 16:57:21 -- common/autotest_common.sh@960 -- # wait 118885 00:18:33.218 [2024-11-05 16:57:21.934638] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:34.155 00:18:34.155 real 0m14.999s 00:18:34.155 user 0m26.825s 00:18:34.155 sys 0m1.688s 00:18:34.155 16:57:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:34.155 16:57:22 -- common/autotest_common.sh@10 -- # set +x 00:18:34.155 ************************************ 00:18:34.155 END TEST raid_state_function_test_sb 00:18:34.155 ************************************ 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:18:34.155 16:57:22 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:34.155 16:57:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:34.155 16:57:22 -- common/autotest_common.sh@10 -- # set +x 00:18:34.155 ************************************ 00:18:34.155 START 
TEST raid_superblock_test 00:18:34.155 ************************************ 00:18:34.155 16:57:22 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 4 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@357 -- # raid_pid=119334 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:34.155 16:57:22 -- bdev/bdev_raid.sh@358 -- # waitforlisten 119334 /var/tmp/spdk-raid.sock 00:18:34.155 16:57:22 -- common/autotest_common.sh@829 -- # '[' -z 119334 ']' 00:18:34.155 16:57:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:34.155 16:57:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.155 16:57:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:34.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:34.155 16:57:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.155 16:57:22 -- common/autotest_common.sh@10 -- # set +x 00:18:34.155 [2024-11-05 16:57:22.989193] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:34.155 [2024-11-05 16:57:22.989367] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119334 ] 00:18:34.414 [2024-11-05 16:57:23.161227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.672 [2024-11-05 16:57:23.365566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.672 [2024-11-05 16:57:23.531527] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:35.240 16:57:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:35.240 16:57:23 -- common/autotest_common.sh@862 -- # return 0 00:18:35.240 16:57:23 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:35.240 16:57:23 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:35.240 16:57:23 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:35.240 16:57:23 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:35.240 16:57:23 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:35.240 16:57:23 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:35.240 16:57:23 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:35.240 16:57:23 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:35.240 16:57:23 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:35.498 malloc1 00:18:35.498 16:57:24 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:35.756 [2024-11-05 16:57:24.406554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:35.756 [2024-11-05 16:57:24.406675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.756 [2024-11-05 16:57:24.406708] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:18:35.756 [2024-11-05 16:57:24.406755] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.756 [2024-11-05 16:57:24.409230] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.756 [2024-11-05 16:57:24.409298] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:35.756 pt1 00:18:35.756 16:57:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:35.756 16:57:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:35.756 16:57:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:35.756 16:57:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:35.756 16:57:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:35.756 16:57:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:35.756 16:57:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:35.756 16:57:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:35.756 16:57:24 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:35.756 malloc2 00:18:36.015 16:57:24 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:18:36.015 [2024-11-05 16:57:24.845687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:36.015 [2024-11-05 16:57:24.845807] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.015 [2024-11-05 16:57:24.845848] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:36.015 [2024-11-05 16:57:24.845900] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.015 [2024-11-05 16:57:24.848223] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.015 [2024-11-05 16:57:24.848287] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:36.015 pt2 00:18:36.015 16:57:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:36.015 16:57:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:36.015 16:57:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:36.015 16:57:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:36.015 16:57:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:36.015 16:57:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:36.015 16:57:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:36.015 16:57:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:36.015 16:57:24 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:36.274 malloc3 00:18:36.274 16:57:25 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:36.533 [2024-11-05 16:57:25.327491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:36.533 [2024-11-05 16:57:25.327598] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.533 [2024-11-05 16:57:25.327643] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:36.533 [2024-11-05 16:57:25.327688] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.533 [2024-11-05 16:57:25.329995] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.533 [2024-11-05 16:57:25.330067] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:36.533 pt3 00:18:36.533 16:57:25 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:36.533 16:57:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:36.533 16:57:25 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:36.533 16:57:25 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:36.533 16:57:25 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:36.533 16:57:25 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:36.533 16:57:25 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:36.533 16:57:25 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:36.533 16:57:25 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:36.791 malloc4 00:18:36.791 16:57:25 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:18:37.050 [2024-11-05 16:57:25.752853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:37.050 [2024-11-05 16:57:25.752966] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.050 [2024-11-05 16:57:25.752999] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:18:37.050 [2024-11-05 16:57:25.753056] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.050 [2024-11-05 16:57:25.755516] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.050 [2024-11-05 16:57:25.755582] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:37.050 pt4 00:18:37.050 16:57:25 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:37.050 16:57:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:37.050 16:57:25 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:37.050 [2024-11-05 16:57:25.948947] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:37.309 [2024-11-05 16:57:25.950978] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:37.309 [2024-11-05 16:57:25.951089] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:37.309 [2024-11-05 16:57:25.951167] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:37.309 [2024-11-05 16:57:25.951421] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:18:37.309 [2024-11-05 16:57:25.951444] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:37.309 [2024-11-05 16:57:25.951561] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:18:37.309 [2024-11-05 16:57:25.951951] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:18:37.309 [2024-11-05 16:57:25.951975] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:18:37.309 [2024-11-05 16:57:25.952132] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.309 16:57:25 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:37.309 16:57:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:37.309 16:57:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:37.309 16:57:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:37.309 16:57:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:37.309 16:57:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:37.309 16:57:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:37.309 16:57:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:37.309 16:57:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:37.309 16:57:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:37.309 16:57:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.309 16:57:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.309 16:57:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:37.309 "name": "raid_bdev1", 00:18:37.309 "uuid": 
"8fe8c986-68cd-4fe0-bf5f-0f6038c8d7f3", 00:18:37.309 "strip_size_kb": 64, 00:18:37.309 "state": "online", 00:18:37.309 "raid_level": "raid0", 00:18:37.309 "superblock": true, 00:18:37.309 "num_base_bdevs": 4, 00:18:37.309 "num_base_bdevs_discovered": 4, 00:18:37.309 "num_base_bdevs_operational": 4, 00:18:37.309 "base_bdevs_list": [ 00:18:37.309 { 00:18:37.309 "name": "pt1", 00:18:37.309 "uuid": "84b6a371-a32d-52be-803e-c62eba4ea7ab", 00:18:37.309 "is_configured": true, 00:18:37.309 "data_offset": 2048, 00:18:37.309 "data_size": 63488 00:18:37.309 }, 00:18:37.309 { 00:18:37.309 "name": "pt2", 00:18:37.309 "uuid": "00b8400b-f500-531e-869e-07bf70cc76c1", 00:18:37.309 "is_configured": true, 00:18:37.309 "data_offset": 2048, 00:18:37.309 "data_size": 63488 00:18:37.309 }, 00:18:37.309 { 00:18:37.309 "name": "pt3", 00:18:37.309 "uuid": "10593af9-dcdd-518a-bc5d-0257429a396b", 00:18:37.309 "is_configured": true, 00:18:37.309 "data_offset": 2048, 00:18:37.309 "data_size": 63488 00:18:37.309 }, 00:18:37.309 { 00:18:37.309 "name": "pt4", 00:18:37.309 "uuid": "1a783d98-f769-5bd8-8f30-d1686f4db663", 00:18:37.309 "is_configured": true, 00:18:37.309 "data_offset": 2048, 00:18:37.309 "data_size": 63488 00:18:37.309 } 00:18:37.309 ] 00:18:37.309 }' 00:18:37.309 16:57:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:37.309 16:57:26 -- common/autotest_common.sh@10 -- # set +x 00:18:37.877 16:57:26 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:37.877 16:57:26 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:38.135 [2024-11-05 16:57:26.953316] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.135 16:57:26 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=8fe8c986-68cd-4fe0-bf5f-0f6038c8d7f3 00:18:38.135 16:57:26 -- bdev/bdev_raid.sh@380 -- # '[' -z 8fe8c986-68cd-4fe0-bf5f-0f6038c8d7f3 ']' 00:18:38.135 16:57:26 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:38.394 [2024-11-05 16:57:27.209154] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.394 [2024-11-05 16:57:27.209203] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.394 [2024-11-05 16:57:27.209297] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.394 [2024-11-05 16:57:27.209371] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.394 [2024-11-05 16:57:27.209383] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:18:38.394 16:57:27 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.394 16:57:27 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:38.653 16:57:27 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:38.653 16:57:27 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:38.653 16:57:27 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:38.653 16:57:27 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:38.912 16:57:27 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:38.912 16:57:27 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:18:39.171 16:57:27 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.171 16:57:27 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:39.430 16:57:28 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.430 16:57:28 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:39.430 16:57:28 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:39.430 16:57:28 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:39.688 16:57:28 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:39.688 16:57:28 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:39.688 16:57:28 -- common/autotest_common.sh@650 -- # local es=0 00:18:39.688 16:57:28 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:39.688 16:57:28 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:39.688 16:57:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.688 16:57:28 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:39.688 16:57:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.688 16:57:28 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:39.688 16:57:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.688 16:57:28 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:39.688 16:57:28 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:39.688 16:57:28 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:39.994 [2024-11-05 16:57:28.625428] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:39.994 [2024-11-05 16:57:28.627134] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:39.994 [2024-11-05 16:57:28.627186] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:39.994 [2024-11-05 16:57:28.627228] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:39.994 [2024-11-05 16:57:28.627275] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:39.994 [2024-11-05 16:57:28.627364] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:39.994 [2024-11-05 16:57:28.627417] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:39.994 [2024-11-05 16:57:28.627511] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:18:39.994 [2024-11-05 16:57:28.627540] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:39.994 [2024-11-05 16:57:28.627551] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:18:39.994 request: 00:18:39.994 { 00:18:39.994 "name": "raid_bdev1", 00:18:39.994 "raid_level": "raid0", 00:18:39.994 "base_bdevs": [ 00:18:39.994 "malloc1", 00:18:39.994 "malloc2", 00:18:39.994 "malloc3", 00:18:39.994 "malloc4" 00:18:39.994 ], 00:18:39.994 "superblock": false, 00:18:39.994 "strip_size_kb": 64, 00:18:39.994 "method": "bdev_raid_create", 00:18:39.994 "req_id": 1 00:18:39.994 } 00:18:39.994 Got JSON-RPC error response 00:18:39.994 response: 00:18:39.994 { 00:18:39.994 "code": -17, 00:18:39.994 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:39.994 } 00:18:39.994 16:57:28 -- common/autotest_common.sh@653 -- # es=1 00:18:39.994 16:57:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:39.994 16:57:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:39.994 16:57:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:39.994 16:57:28 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.994 16:57:28 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:40.257 16:57:28 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:40.257 16:57:28 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:40.257 16:57:28 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:40.257 [2024-11-05 16:57:29.053482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:40.257 [2024-11-05 16:57:29.053560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.257 [2024-11-05 16:57:29.053593] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:40.257 [2024-11-05 16:57:29.053636] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.257 [2024-11-05 16:57:29.055993] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.257 [2024-11-05 16:57:29.056079] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:40.257 [2024-11-05 16:57:29.056192] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:40.257 [2024-11-05 16:57:29.056248] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:40.257 pt1 00:18:40.257 16:57:29 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:40.257 16:57:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:40.257 16:57:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:40.257 16:57:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:40.257 16:57:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:40.257 16:57:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:40.257 16:57:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:40.257 16:57:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:40.257 16:57:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:40.257 16:57:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:40.257 16:57:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.257 16:57:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.515 16:57:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:40.515 "name": "raid_bdev1", 00:18:40.515 "uuid": "8fe8c986-68cd-4fe0-bf5f-0f6038c8d7f3", 00:18:40.515 "strip_size_kb": 64, 00:18:40.515 "state": "configuring", 00:18:40.515 "raid_level": "raid0", 00:18:40.515 "superblock": true, 00:18:40.515 "num_base_bdevs": 4, 00:18:40.515 "num_base_bdevs_discovered": 1, 00:18:40.515 "num_base_bdevs_operational": 4, 00:18:40.515 "base_bdevs_list": [ 00:18:40.515 { 00:18:40.515 "name": "pt1", 00:18:40.515 "uuid": "84b6a371-a32d-52be-803e-c62eba4ea7ab", 00:18:40.515 "is_configured": true, 00:18:40.515 "data_offset": 2048, 00:18:40.515 "data_size": 63488 00:18:40.516 }, 00:18:40.516 { 00:18:40.516 "name": null, 00:18:40.516 "uuid": "00b8400b-f500-531e-869e-07bf70cc76c1", 00:18:40.516 "is_configured": false, 00:18:40.516 "data_offset": 2048, 00:18:40.516 "data_size": 63488 00:18:40.516 }, 00:18:40.516 { 00:18:40.516 "name": null, 00:18:40.516 "uuid": "10593af9-dcdd-518a-bc5d-0257429a396b", 00:18:40.516 "is_configured": false, 00:18:40.516 "data_offset": 2048, 00:18:40.516 "data_size": 63488 00:18:40.516 }, 00:18:40.516 { 00:18:40.516 "name": null, 00:18:40.516 "uuid": "1a783d98-f769-5bd8-8f30-d1686f4db663", 00:18:40.516 "is_configured": false, 00:18:40.516 "data_offset": 2048, 00:18:40.516 "data_size": 63488 00:18:40.516 } 00:18:40.516 ] 00:18:40.516 }' 00:18:40.516 16:57:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:40.516 16:57:29 -- common/autotest_common.sh@10 -- # set +x 00:18:41.082 16:57:29 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:18:41.082 16:57:29 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:41.340 [2024-11-05 16:57:30.121837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:41.340 [2024-11-05 16:57:30.121947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.340 [2024-11-05 16:57:30.121989] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:41.340 [2024-11-05 16:57:30.122011] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.340 [2024-11-05 16:57:30.122549] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.340 [2024-11-05 16:57:30.122622] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:41.340 [2024-11-05 16:57:30.122744] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:41.340 [2024-11-05 16:57:30.122769] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:41.340 pt2 00:18:41.340 16:57:30 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:41.599 [2024-11-05 16:57:30.377873] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:41.599 16:57:30 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:41.599 16:57:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:41.599 16:57:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:41.599 16:57:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:41.599 16:57:30 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:41.599 16:57:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:41.599 16:57:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:41.599 16:57:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:41.599 16:57:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:41.599 16:57:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:41.599 16:57:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.599 16:57:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.858 16:57:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:41.858 "name": "raid_bdev1", 00:18:41.858 "uuid": "8fe8c986-68cd-4fe0-bf5f-0f6038c8d7f3", 00:18:41.858 "strip_size_kb": 64, 00:18:41.858 "state": "configuring", 00:18:41.858 "raid_level": "raid0", 00:18:41.858 "superblock": true, 00:18:41.858 "num_base_bdevs": 4, 00:18:41.858 "num_base_bdevs_discovered": 1, 00:18:41.858 "num_base_bdevs_operational": 4, 00:18:41.858 "base_bdevs_list": [ 00:18:41.858 { 00:18:41.858 "name": "pt1", 00:18:41.858 "uuid": "84b6a371-a32d-52be-803e-c62eba4ea7ab", 00:18:41.858 "is_configured": true, 00:18:41.858 "data_offset": 2048, 00:18:41.858 "data_size": 63488 00:18:41.858 }, 00:18:41.858 { 00:18:41.858 "name": null, 00:18:41.858 "uuid": "00b8400b-f500-531e-869e-07bf70cc76c1", 00:18:41.858 "is_configured": false, 00:18:41.858 "data_offset": 2048, 00:18:41.858 "data_size": 63488 00:18:41.858 }, 00:18:41.858 { 00:18:41.858 "name": null, 00:18:41.858 "uuid": "10593af9-dcdd-518a-bc5d-0257429a396b", 00:18:41.858 "is_configured": false, 00:18:41.858 "data_offset": 2048, 00:18:41.858 "data_size": 63488 00:18:41.858 }, 00:18:41.858 { 00:18:41.858 "name": null, 00:18:41.858 "uuid": "1a783d98-f769-5bd8-8f30-d1686f4db663", 00:18:41.858 "is_configured": false, 00:18:41.858 "data_offset": 2048, 00:18:41.858 "data_size": 63488 00:18:41.858 } 00:18:41.858 ] 00:18:41.858 }' 00:18:41.858 16:57:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:41.858 16:57:30 -- common/autotest_common.sh@10 -- # set +x 00:18:42.426 16:57:31 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:42.426 16:57:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:42.426 16:57:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:42.685 [2024-11-05 16:57:31.386085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:42.685 [2024-11-05 16:57:31.386172] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.685 [2024-11-05 16:57:31.386209] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:42.685 [2024-11-05 16:57:31.386230] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.685 [2024-11-05 16:57:31.386761] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.685 [2024-11-05 16:57:31.386839] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:42.685 [2024-11-05 16:57:31.386963] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:42.685 [2024-11-05 16:57:31.387004] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:42.685 pt2 00:18:42.685 16:57:31 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:42.685 16:57:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:42.685 16:57:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:42.944 [2024-11-05 16:57:31.634128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:42.944 [2024-11-05 16:57:31.634204] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.944 [2024-11-05 16:57:31.634231] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:42.944 [2024-11-05 16:57:31.634256] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.944 [2024-11-05 16:57:31.634687] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.944 [2024-11-05 16:57:31.634751] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:42.944 [2024-11-05 16:57:31.634834] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:42.944 [2024-11-05 16:57:31.634855] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:42.944 pt3 00:18:42.944 16:57:31 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:42.944 16:57:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:42.944 16:57:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:42.944 [2024-11-05 16:57:31.822172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:42.944 [2024-11-05 16:57:31.822252] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.944 [2024-11-05 16:57:31.822286] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:42.944 [2024-11-05 16:57:31.822309] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.944 [2024-11-05 16:57:31.822711] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.944 [2024-11-05 16:57:31.822770] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:42.944 [2024-11-05 16:57:31.822861] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:42.944 [2024-11-05 16:57:31.822915] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:42.944 [2024-11-05 16:57:31.823059] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:18:42.944 [2024-11-05 16:57:31.823073] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:42.944 [2024-11-05 16:57:31.823175] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:42.944 [2024-11-05 16:57:31.823514] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:18:42.944 [2024-11-05 16:57:31.823538] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:18:42.944 [2024-11-05 16:57:31.823666] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.944 pt4 00:18:42.944 16:57:31 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:42.944 16:57:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:18:42.944 16:57:31 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:42.944 16:57:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:42.944 16:57:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:42.944 16:57:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:42.944 16:57:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:42.944 16:57:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:42.944 16:57:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:42.944 16:57:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:42.944 16:57:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:42.944 16:57:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:42.944 16:57:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.944 16:57:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.203 16:57:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:43.203 "name": "raid_bdev1", 00:18:43.203 "uuid": "8fe8c986-68cd-4fe0-bf5f-0f6038c8d7f3", 00:18:43.203 "strip_size_kb": 64, 00:18:43.203 "state": "online", 00:18:43.203 "raid_level": "raid0", 00:18:43.203 "superblock": true, 00:18:43.203 "num_base_bdevs": 4, 00:18:43.203 "num_base_bdevs_discovered": 4, 00:18:43.203 "num_base_bdevs_operational": 4, 00:18:43.203 "base_bdevs_list": [ 00:18:43.203 { 00:18:43.203 "name": "pt1", 00:18:43.203 "uuid": "84b6a371-a32d-52be-803e-c62eba4ea7ab", 00:18:43.203 "is_configured": true, 00:18:43.203 "data_offset": 2048, 00:18:43.203 "data_size": 63488 00:18:43.203 }, 00:18:43.203 { 00:18:43.203 "name": "pt2", 00:18:43.203 "uuid": "00b8400b-f500-531e-869e-07bf70cc76c1", 00:18:43.203 "is_configured": true, 00:18:43.203 "data_offset": 2048, 00:18:43.203 "data_size": 63488 00:18:43.203 }, 00:18:43.203 { 00:18:43.203 "name": "pt3", 00:18:43.203 "uuid": "10593af9-dcdd-518a-bc5d-0257429a396b", 00:18:43.203 "is_configured": true, 00:18:43.203 "data_offset": 2048, 00:18:43.203 "data_size": 63488 00:18:43.203 }, 00:18:43.203 { 00:18:43.203 "name": "pt4", 00:18:43.203 "uuid": "1a783d98-f769-5bd8-8f30-d1686f4db663", 00:18:43.203 "is_configured": true, 00:18:43.203 "data_offset": 2048, 00:18:43.203 "data_size": 63488 00:18:43.203 } 00:18:43.203 ] 00:18:43.203 }' 00:18:43.203 16:57:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:43.203 16:57:32 -- common/autotest_common.sh@10 -- # set +x 00:18:44.138 16:57:32 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:44.138 16:57:32 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:44.138 [2024-11-05 16:57:32.870585] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.138 16:57:32 -- bdev/bdev_raid.sh@430 -- # '[' 8fe8c986-68cd-4fe0-bf5f-0f6038c8d7f3 '!=' 8fe8c986-68cd-4fe0-bf5f-0f6038c8d7f3 ']' 00:18:44.138 16:57:32 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:18:44.138 16:57:32 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:44.138 16:57:32 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:44.138 16:57:32 -- bdev/bdev_raid.sh@511 -- # killprocess 119334 00:18:44.138 16:57:32 -- common/autotest_common.sh@936 -- # '[' -z 119334 ']' 00:18:44.138 16:57:32 -- common/autotest_common.sh@940 -- # kill -0 119334 00:18:44.138 16:57:32 -- common/autotest_common.sh@941 -- # uname 00:18:44.138 16:57:32 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:44.138 16:57:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119334 00:18:44.138 16:57:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:44.138 16:57:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:44.138 16:57:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119334' 00:18:44.138 killing process with pid 119334 00:18:44.138 16:57:32 -- common/autotest_common.sh@955 -- # kill 119334 00:18:44.138 [2024-11-05 16:57:32.913453] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:44.138 [2024-11-05 16:57:32.913630] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.138 16:57:32 -- common/autotest_common.sh@960 -- # wait 119334 00:18:44.138 [2024-11-05 16:57:32.913849] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.138 [2024-11-05 16:57:32.913971] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:18:44.397 [2024-11-05 16:57:33.189621] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:45.331 ************************************ 00:18:45.331 END TEST raid_superblock_test 00:18:45.331 ************************************ 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:45.331 00:18:45.331 real 0m11.176s 00:18:45.331 user 0m19.451s 00:18:45.331 sys 0m1.407s 00:18:45.331 16:57:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:45.331 16:57:34 -- common/autotest_common.sh@10 -- # set +x 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:18:45.331 16:57:34 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:45.331 16:57:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:45.331 16:57:34 -- common/autotest_common.sh@10 -- # set +x 00:18:45.331 ************************************ 00:18:45.331 START TEST raid_state_function_test 00:18:45.331 ************************************ 00:18:45.331 16:57:34 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 4 false 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:45.331 
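The raid_superblock_test that finishes in the trace above reduces to a short RPC sequence. A condensed sketch of it, assuming a target already listening on /var/tmp/spdk-raid.sock with malloc1..malloc4 present; the loop structure and inline check are illustrative, not the harness's verbatim code:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Assemble: wrap each malloc bdev in a passthru bdev, then stripe the
    # four into raid0 with a 64 KiB strip; -s writes a raid superblock onto
    # every base bdev.
    for i in 1 2 3 4; do
        $RPC bdev_passthru_create -b malloc$i -p pt$i \
            -u 00000000-0000-0000-0000-00000000000$i
    done
    $RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

    # Tear down the array and its passthru bases.
    $RPC bdev_raid_delete raid_bdev1
    for i in 1 2 3 4; do $RPC bdev_passthru_delete pt$i; done

    # Negative check: creating a fresh array straight on the malloc bdevs is
    # rejected with JSON-RPC error -17 (File exists), because each malloc
    # bdev still carries the superblock written above.
    $RPC bdev_raid_create -z 64 -r raid0 \
        -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 \
        && echo "unexpected: create should have failed"

    # Re-register the passthrus: examine finds the superblock on each one and
    # reassembles raid_bdev1 on its own, "configuring" until the fourth base
    # lands, then "online" -- no second bdev_raid_create is needed.
    for i in 1 2 3 4; do
        $RPC bdev_passthru_create -b malloc$i -p pt$i \
            -u 00000000-0000-0000-0000-00000000000$i
    done
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'

The reassembly leg is the point of the test: with -s at create time, array membership lives on the base bdevs themselves, so after a teardown only the bases have to reappear.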
16:57:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@226 -- # raid_pid=119662 00:18:45.331 Process raid pid: 119662 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119662' 00:18:45.331 16:57:34 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119662 /var/tmp/spdk-raid.sock 00:18:45.331 16:57:34 -- common/autotest_common.sh@829 -- # '[' -z 119662 ']' 00:18:45.331 16:57:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:45.331 16:57:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:45.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:45.331 16:57:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:45.331 16:57:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:45.331 16:57:34 -- common/autotest_common.sh@10 -- # set +x 00:18:45.331 [2024-11-05 16:57:34.220246] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
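The state-function test drives a bare bdev_svc app rather than a full SPDK target. A minimal sketch of the launch step traced here, assuming autotest_common.sh has been sourced so the waitforlisten helper is available:

    # Start the skeleton bdev_svc app as the JSON-RPC server: -r sets the
    # socket path, -i 0 the shared-memory ID, -L bdev_raid enables the raid
    # module's debug log flag seen throughout this trace.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Poll until the app accepts RPCs on the socket before driving it.
    waitforlisten $raid_pid /var/tmp/spdk-raid.sock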
00:18:45.331 [2024-11-05 16:57:34.220924] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.589 [2024-11-05 16:57:34.387465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.847 [2024-11-05 16:57:34.552869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.847 [2024-11-05 16:57:34.720731] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.414 16:57:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:46.414 16:57:35 -- common/autotest_common.sh@862 -- # return 0 00:18:46.414 16:57:35 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:46.672 [2024-11-05 16:57:35.340052] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:46.672 [2024-11-05 16:57:35.340120] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:46.672 [2024-11-05 16:57:35.340149] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:46.672 [2024-11-05 16:57:35.340169] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:46.672 [2024-11-05 16:57:35.340176] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:46.672 [2024-11-05 16:57:35.340244] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:46.672 [2024-11-05 16:57:35.340253] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:46.672 [2024-11-05 16:57:35.340279] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:46.672 16:57:35 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:46.672 16:57:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:46.672 16:57:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:46.672 16:57:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:46.672 16:57:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:46.672 16:57:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:46.672 16:57:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:46.672 16:57:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:46.672 16:57:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:46.672 16:57:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:46.672 16:57:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.672 16:57:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.930 16:57:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:46.930 "name": "Existed_Raid", 00:18:46.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.930 "strip_size_kb": 64, 00:18:46.930 "state": "configuring", 00:18:46.930 "raid_level": "concat", 00:18:46.930 "superblock": false, 00:18:46.930 "num_base_bdevs": 4, 00:18:46.930 "num_base_bdevs_discovered": 0, 00:18:46.930 "num_base_bdevs_operational": 4, 00:18:46.930 "base_bdevs_list": [ 00:18:46.930 { 00:18:46.930 
"name": "BaseBdev1", 00:18:46.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.930 "is_configured": false, 00:18:46.930 "data_offset": 0, 00:18:46.930 "data_size": 0 00:18:46.930 }, 00:18:46.930 { 00:18:46.930 "name": "BaseBdev2", 00:18:46.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.930 "is_configured": false, 00:18:46.930 "data_offset": 0, 00:18:46.930 "data_size": 0 00:18:46.930 }, 00:18:46.930 { 00:18:46.930 "name": "BaseBdev3", 00:18:46.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.930 "is_configured": false, 00:18:46.930 "data_offset": 0, 00:18:46.930 "data_size": 0 00:18:46.930 }, 00:18:46.930 { 00:18:46.930 "name": "BaseBdev4", 00:18:46.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.930 "is_configured": false, 00:18:46.930 "data_offset": 0, 00:18:46.930 "data_size": 0 00:18:46.930 } 00:18:46.930 ] 00:18:46.930 }' 00:18:46.930 16:57:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:46.930 16:57:35 -- common/autotest_common.sh@10 -- # set +x 00:18:47.512 16:57:36 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:47.512 [2024-11-05 16:57:36.356161] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:47.512 [2024-11-05 16:57:36.356213] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:18:47.512 16:57:36 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:47.783 [2024-11-05 16:57:36.612263] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:47.783 [2024-11-05 16:57:36.612327] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:47.783 [2024-11-05 16:57:36.612356] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:47.783 [2024-11-05 16:57:36.612381] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:47.783 [2024-11-05 16:57:36.612389] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:47.783 [2024-11-05 16:57:36.612442] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:47.783 [2024-11-05 16:57:36.612450] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:47.783 [2024-11-05 16:57:36.612473] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:47.783 16:57:36 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:48.041 [2024-11-05 16:57:36.834602] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:48.041 BaseBdev1 00:18:48.041 16:57:36 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:48.041 16:57:36 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:48.041 16:57:36 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:48.041 16:57:36 -- common/autotest_common.sh@899 -- # local i 00:18:48.041 16:57:36 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:48.041 16:57:36 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:48.041 16:57:36 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:48.300 16:57:37 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:48.558 [ 00:18:48.558 { 00:18:48.558 "name": "BaseBdev1", 00:18:48.558 "aliases": [ 00:18:48.558 "7b21d115-9490-4da6-8bb5-8c462a21d8e4" 00:18:48.558 ], 00:18:48.558 "product_name": "Malloc disk", 00:18:48.558 "block_size": 512, 00:18:48.558 "num_blocks": 65536, 00:18:48.558 "uuid": "7b21d115-9490-4da6-8bb5-8c462a21d8e4", 00:18:48.558 "assigned_rate_limits": { 00:18:48.558 "rw_ios_per_sec": 0, 00:18:48.558 "rw_mbytes_per_sec": 0, 00:18:48.558 "r_mbytes_per_sec": 0, 00:18:48.558 "w_mbytes_per_sec": 0 00:18:48.558 }, 00:18:48.558 "claimed": true, 00:18:48.558 "claim_type": "exclusive_write", 00:18:48.558 "zoned": false, 00:18:48.558 "supported_io_types": { 00:18:48.558 "read": true, 00:18:48.558 "write": true, 00:18:48.558 "unmap": true, 00:18:48.558 "write_zeroes": true, 00:18:48.558 "flush": true, 00:18:48.558 "reset": true, 00:18:48.558 "compare": false, 00:18:48.558 "compare_and_write": false, 00:18:48.558 "abort": true, 00:18:48.558 "nvme_admin": false, 00:18:48.558 "nvme_io": false 00:18:48.558 }, 00:18:48.558 "memory_domains": [ 00:18:48.558 { 00:18:48.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.558 "dma_device_type": 2 00:18:48.558 } 00:18:48.558 ], 00:18:48.558 "driver_specific": {} 00:18:48.558 } 00:18:48.558 ] 00:18:48.558 16:57:37 -- common/autotest_common.sh@905 -- # return 0 00:18:48.558 16:57:37 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:48.558 16:57:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:48.558 16:57:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:48.558 16:57:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:48.558 16:57:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:48.558 16:57:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:48.558 16:57:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:48.558 16:57:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:48.558 16:57:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:48.558 16:57:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:48.558 16:57:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.558 16:57:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.816 16:57:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:48.816 "name": "Existed_Raid", 00:18:48.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.816 "strip_size_kb": 64, 00:18:48.816 "state": "configuring", 00:18:48.816 "raid_level": "concat", 00:18:48.816 "superblock": false, 00:18:48.816 "num_base_bdevs": 4, 00:18:48.816 "num_base_bdevs_discovered": 1, 00:18:48.816 "num_base_bdevs_operational": 4, 00:18:48.816 "base_bdevs_list": [ 00:18:48.816 { 00:18:48.816 "name": "BaseBdev1", 00:18:48.816 "uuid": "7b21d115-9490-4da6-8bb5-8c462a21d8e4", 00:18:48.816 "is_configured": true, 00:18:48.816 "data_offset": 0, 00:18:48.816 "data_size": 65536 00:18:48.816 }, 00:18:48.816 { 00:18:48.816 "name": "BaseBdev2", 00:18:48.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.816 "is_configured": false, 00:18:48.816 "data_offset": 0, 00:18:48.816 "data_size": 0 00:18:48.816 }, 
00:18:48.816 { 00:18:48.816 "name": "BaseBdev3", 00:18:48.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.816 "is_configured": false, 00:18:48.816 "data_offset": 0, 00:18:48.816 "data_size": 0 00:18:48.816 }, 00:18:48.816 { 00:18:48.816 "name": "BaseBdev4", 00:18:48.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.816 "is_configured": false, 00:18:48.816 "data_offset": 0, 00:18:48.816 "data_size": 0 00:18:48.816 } 00:18:48.816 ] 00:18:48.816 }' 00:18:48.816 16:57:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:48.816 16:57:37 -- common/autotest_common.sh@10 -- # set +x 00:18:49.382 16:57:38 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:49.640 [2024-11-05 16:57:38.367582] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:49.640 [2024-11-05 16:57:38.367653] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:49.640 16:57:38 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:49.640 16:57:38 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:49.898 [2024-11-05 16:57:38.611668] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:49.898 [2024-11-05 16:57:38.613687] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:49.898 [2024-11-05 16:57:38.613784] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:49.898 [2024-11-05 16:57:38.613814] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:49.898 [2024-11-05 16:57:38.613839] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:49.898 [2024-11-05 16:57:38.613847] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:49.898 [2024-11-05 16:57:38.613864] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:49.898 16:57:38 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:49.898 16:57:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:49.898 16:57:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:49.898 16:57:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:49.898 16:57:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:49.898 16:57:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:49.898 16:57:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:49.898 16:57:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:49.898 16:57:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:49.898 16:57:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:49.898 16:57:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:49.898 16:57:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:49.898 16:57:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.898 16:57:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.156 16:57:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:50.156 "name": "Existed_Raid", 00:18:50.156 
"uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.156 "strip_size_kb": 64, 00:18:50.156 "state": "configuring", 00:18:50.156 "raid_level": "concat", 00:18:50.156 "superblock": false, 00:18:50.156 "num_base_bdevs": 4, 00:18:50.156 "num_base_bdevs_discovered": 1, 00:18:50.156 "num_base_bdevs_operational": 4, 00:18:50.156 "base_bdevs_list": [ 00:18:50.156 { 00:18:50.156 "name": "BaseBdev1", 00:18:50.156 "uuid": "7b21d115-9490-4da6-8bb5-8c462a21d8e4", 00:18:50.156 "is_configured": true, 00:18:50.156 "data_offset": 0, 00:18:50.156 "data_size": 65536 00:18:50.156 }, 00:18:50.156 { 00:18:50.156 "name": "BaseBdev2", 00:18:50.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.156 "is_configured": false, 00:18:50.156 "data_offset": 0, 00:18:50.156 "data_size": 0 00:18:50.156 }, 00:18:50.156 { 00:18:50.156 "name": "BaseBdev3", 00:18:50.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.156 "is_configured": false, 00:18:50.156 "data_offset": 0, 00:18:50.156 "data_size": 0 00:18:50.156 }, 00:18:50.156 { 00:18:50.156 "name": "BaseBdev4", 00:18:50.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.156 "is_configured": false, 00:18:50.156 "data_offset": 0, 00:18:50.156 "data_size": 0 00:18:50.156 } 00:18:50.156 ] 00:18:50.156 }' 00:18:50.156 16:57:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:50.156 16:57:38 -- common/autotest_common.sh@10 -- # set +x 00:18:50.723 16:57:39 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:50.979 [2024-11-05 16:57:39.709819] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:50.979 BaseBdev2 00:18:50.979 16:57:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:50.979 16:57:39 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:50.979 16:57:39 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:50.979 16:57:39 -- common/autotest_common.sh@899 -- # local i 00:18:50.979 16:57:39 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:50.979 16:57:39 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:50.979 16:57:39 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:51.237 16:57:39 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:51.495 [ 00:18:51.495 { 00:18:51.495 "name": "BaseBdev2", 00:18:51.495 "aliases": [ 00:18:51.495 "559af06a-31d0-42c0-a464-a70d2f755f6d" 00:18:51.495 ], 00:18:51.495 "product_name": "Malloc disk", 00:18:51.495 "block_size": 512, 00:18:51.495 "num_blocks": 65536, 00:18:51.495 "uuid": "559af06a-31d0-42c0-a464-a70d2f755f6d", 00:18:51.495 "assigned_rate_limits": { 00:18:51.495 "rw_ios_per_sec": 0, 00:18:51.495 "rw_mbytes_per_sec": 0, 00:18:51.495 "r_mbytes_per_sec": 0, 00:18:51.495 "w_mbytes_per_sec": 0 00:18:51.495 }, 00:18:51.495 "claimed": true, 00:18:51.495 "claim_type": "exclusive_write", 00:18:51.495 "zoned": false, 00:18:51.495 "supported_io_types": { 00:18:51.495 "read": true, 00:18:51.495 "write": true, 00:18:51.495 "unmap": true, 00:18:51.495 "write_zeroes": true, 00:18:51.495 "flush": true, 00:18:51.495 "reset": true, 00:18:51.495 "compare": false, 00:18:51.495 "compare_and_write": false, 00:18:51.495 "abort": true, 00:18:51.495 "nvme_admin": false, 00:18:51.495 "nvme_io": false 00:18:51.495 }, 00:18:51.495 "memory_domains": [ 
00:18:51.495 { 00:18:51.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.495 "dma_device_type": 2 00:18:51.495 } 00:18:51.495 ], 00:18:51.495 "driver_specific": {} 00:18:51.495 } 00:18:51.495 ] 00:18:51.495 16:57:40 -- common/autotest_common.sh@905 -- # return 0 00:18:51.495 16:57:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:51.495 16:57:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:51.495 16:57:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:51.495 16:57:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:51.495 16:57:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:51.495 16:57:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:51.495 16:57:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:51.495 16:57:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:51.495 16:57:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:51.495 16:57:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:51.495 16:57:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:51.495 16:57:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:51.495 16:57:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.495 16:57:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.753 16:57:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:51.753 "name": "Existed_Raid", 00:18:51.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.753 "strip_size_kb": 64, 00:18:51.753 "state": "configuring", 00:18:51.753 "raid_level": "concat", 00:18:51.753 "superblock": false, 00:18:51.753 "num_base_bdevs": 4, 00:18:51.753 "num_base_bdevs_discovered": 2, 00:18:51.753 "num_base_bdevs_operational": 4, 00:18:51.753 "base_bdevs_list": [ 00:18:51.753 { 00:18:51.753 "name": "BaseBdev1", 00:18:51.753 "uuid": "7b21d115-9490-4da6-8bb5-8c462a21d8e4", 00:18:51.753 "is_configured": true, 00:18:51.753 "data_offset": 0, 00:18:51.753 "data_size": 65536 00:18:51.753 }, 00:18:51.753 { 00:18:51.753 "name": "BaseBdev2", 00:18:51.753 "uuid": "559af06a-31d0-42c0-a464-a70d2f755f6d", 00:18:51.753 "is_configured": true, 00:18:51.753 "data_offset": 0, 00:18:51.753 "data_size": 65536 00:18:51.753 }, 00:18:51.753 { 00:18:51.753 "name": "BaseBdev3", 00:18:51.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.753 "is_configured": false, 00:18:51.753 "data_offset": 0, 00:18:51.753 "data_size": 0 00:18:51.753 }, 00:18:51.753 { 00:18:51.753 "name": "BaseBdev4", 00:18:51.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.753 "is_configured": false, 00:18:51.753 "data_offset": 0, 00:18:51.753 "data_size": 0 00:18:51.753 } 00:18:51.753 ] 00:18:51.753 }' 00:18:51.753 16:57:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:51.753 16:57:40 -- common/autotest_common.sh@10 -- # set +x 00:18:52.319 16:57:40 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:52.577 [2024-11-05 16:57:41.252100] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:52.577 BaseBdev3 00:18:52.577 16:57:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:52.577 16:57:41 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:52.577 16:57:41 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:52.577 
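The BaseBdev1 through BaseBdev3 sequences above, and the BaseBdev4 one that follows, all exercise one pattern: the concat array was declared while its bases were still missing, and every bdev_malloc_create fills in a slot. A sketch of that pattern; it skips the delete/re-create churn in the trace, and the jq line is an illustrative condensation of what verify_raid_bdev_state checks:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Accepted even though no BaseBdevN exists yet; the array waits in
    # "configuring" with 0 of 4 bases discovered.
    $RPC bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

    # Each 32 MiB, 512-byte-block malloc bdev is claimed the moment it
    # appears and bumps num_base_bdevs_discovered by one; the fourth flips
    # the state to "online".
    for i in 1 2 3 4; do
        $RPC bdev_malloc_create 32 512 -b BaseBdev$i
        $RPC bdev_raid_get_bdevs all | jq -r \
            '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)"'
    done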
16:57:41 -- common/autotest_common.sh@899 -- # local i 00:18:52.577 16:57:41 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:52.577 16:57:41 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:52.577 16:57:41 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:52.577 16:57:41 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:52.835 [ 00:18:52.835 { 00:18:52.835 "name": "BaseBdev3", 00:18:52.835 "aliases": [ 00:18:52.835 "422303ba-5881-436e-9bd1-b6d5e4c0eaa5" 00:18:52.835 ], 00:18:52.835 "product_name": "Malloc disk", 00:18:52.835 "block_size": 512, 00:18:52.835 "num_blocks": 65536, 00:18:52.835 "uuid": "422303ba-5881-436e-9bd1-b6d5e4c0eaa5", 00:18:52.835 "assigned_rate_limits": { 00:18:52.835 "rw_ios_per_sec": 0, 00:18:52.835 "rw_mbytes_per_sec": 0, 00:18:52.835 "r_mbytes_per_sec": 0, 00:18:52.835 "w_mbytes_per_sec": 0 00:18:52.835 }, 00:18:52.835 "claimed": true, 00:18:52.835 "claim_type": "exclusive_write", 00:18:52.835 "zoned": false, 00:18:52.835 "supported_io_types": { 00:18:52.835 "read": true, 00:18:52.835 "write": true, 00:18:52.835 "unmap": true, 00:18:52.835 "write_zeroes": true, 00:18:52.835 "flush": true, 00:18:52.835 "reset": true, 00:18:52.835 "compare": false, 00:18:52.835 "compare_and_write": false, 00:18:52.835 "abort": true, 00:18:52.835 "nvme_admin": false, 00:18:52.835 "nvme_io": false 00:18:52.835 }, 00:18:52.835 "memory_domains": [ 00:18:52.835 { 00:18:52.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.835 "dma_device_type": 2 00:18:52.835 } 00:18:52.835 ], 00:18:52.835 "driver_specific": {} 00:18:52.835 } 00:18:52.835 ] 00:18:52.835 16:57:41 -- common/autotest_common.sh@905 -- # return 0 00:18:52.835 16:57:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:52.835 16:57:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:52.835 16:57:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:52.835 16:57:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:52.835 16:57:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:52.835 16:57:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:52.835 16:57:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:52.835 16:57:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:52.835 16:57:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:52.835 16:57:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:52.835 16:57:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:52.835 16:57:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:52.835 16:57:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.835 16:57:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.093 16:57:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:53.093 "name": "Existed_Raid", 00:18:53.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.093 "strip_size_kb": 64, 00:18:53.093 "state": "configuring", 00:18:53.093 "raid_level": "concat", 00:18:53.093 "superblock": false, 00:18:53.093 "num_base_bdevs": 4, 00:18:53.093 "num_base_bdevs_discovered": 3, 00:18:53.093 "num_base_bdevs_operational": 4, 00:18:53.093 "base_bdevs_list": [ 00:18:53.093 { 00:18:53.093 "name": 
"BaseBdev1", 00:18:53.093 "uuid": "7b21d115-9490-4da6-8bb5-8c462a21d8e4", 00:18:53.093 "is_configured": true, 00:18:53.093 "data_offset": 0, 00:18:53.093 "data_size": 65536 00:18:53.093 }, 00:18:53.093 { 00:18:53.093 "name": "BaseBdev2", 00:18:53.093 "uuid": "559af06a-31d0-42c0-a464-a70d2f755f6d", 00:18:53.093 "is_configured": true, 00:18:53.093 "data_offset": 0, 00:18:53.093 "data_size": 65536 00:18:53.093 }, 00:18:53.093 { 00:18:53.093 "name": "BaseBdev3", 00:18:53.093 "uuid": "422303ba-5881-436e-9bd1-b6d5e4c0eaa5", 00:18:53.093 "is_configured": true, 00:18:53.093 "data_offset": 0, 00:18:53.093 "data_size": 65536 00:18:53.093 }, 00:18:53.093 { 00:18:53.093 "name": "BaseBdev4", 00:18:53.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.093 "is_configured": false, 00:18:53.093 "data_offset": 0, 00:18:53.093 "data_size": 0 00:18:53.093 } 00:18:53.093 ] 00:18:53.093 }' 00:18:53.093 16:57:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:53.093 16:57:41 -- common/autotest_common.sh@10 -- # set +x 00:18:53.660 16:57:42 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:53.918 [2024-11-05 16:57:42.772792] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:53.918 [2024-11-05 16:57:42.772868] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:18:53.918 [2024-11-05 16:57:42.772879] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:53.918 [2024-11-05 16:57:42.773029] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:18:53.918 [2024-11-05 16:57:42.773412] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:18:53.918 [2024-11-05 16:57:42.773437] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:18:53.918 [2024-11-05 16:57:42.773703] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.918 BaseBdev4 00:18:53.918 16:57:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:53.918 16:57:42 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:18:53.918 16:57:42 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:53.918 16:57:42 -- common/autotest_common.sh@899 -- # local i 00:18:53.918 16:57:42 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:53.918 16:57:42 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:53.918 16:57:42 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:54.176 16:57:42 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:54.434 [ 00:18:54.434 { 00:18:54.434 "name": "BaseBdev4", 00:18:54.434 "aliases": [ 00:18:54.434 "a66d490c-eaa5-4940-b268-9948b7992e40" 00:18:54.434 ], 00:18:54.434 "product_name": "Malloc disk", 00:18:54.434 "block_size": 512, 00:18:54.434 "num_blocks": 65536, 00:18:54.434 "uuid": "a66d490c-eaa5-4940-b268-9948b7992e40", 00:18:54.434 "assigned_rate_limits": { 00:18:54.434 "rw_ios_per_sec": 0, 00:18:54.434 "rw_mbytes_per_sec": 0, 00:18:54.434 "r_mbytes_per_sec": 0, 00:18:54.434 "w_mbytes_per_sec": 0 00:18:54.434 }, 00:18:54.434 "claimed": true, 00:18:54.434 "claim_type": "exclusive_write", 00:18:54.434 "zoned": false, 00:18:54.434 
"supported_io_types": { 00:18:54.434 "read": true, 00:18:54.434 "write": true, 00:18:54.434 "unmap": true, 00:18:54.434 "write_zeroes": true, 00:18:54.434 "flush": true, 00:18:54.434 "reset": true, 00:18:54.434 "compare": false, 00:18:54.434 "compare_and_write": false, 00:18:54.434 "abort": true, 00:18:54.434 "nvme_admin": false, 00:18:54.434 "nvme_io": false 00:18:54.434 }, 00:18:54.434 "memory_domains": [ 00:18:54.434 { 00:18:54.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.434 "dma_device_type": 2 00:18:54.434 } 00:18:54.434 ], 00:18:54.434 "driver_specific": {} 00:18:54.434 } 00:18:54.434 ] 00:18:54.434 16:57:43 -- common/autotest_common.sh@905 -- # return 0 00:18:54.434 16:57:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:54.434 16:57:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:54.434 16:57:43 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:54.434 16:57:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:54.434 16:57:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:54.434 16:57:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:54.434 16:57:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:54.434 16:57:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:54.434 16:57:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:54.434 16:57:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:54.434 16:57:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:54.434 16:57:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:54.435 16:57:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.435 16:57:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.693 16:57:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:54.693 "name": "Existed_Raid", 00:18:54.693 "uuid": "21d71689-7c2d-45a6-9c55-40fc9a9e8322", 00:18:54.693 "strip_size_kb": 64, 00:18:54.693 "state": "online", 00:18:54.693 "raid_level": "concat", 00:18:54.693 "superblock": false, 00:18:54.693 "num_base_bdevs": 4, 00:18:54.693 "num_base_bdevs_discovered": 4, 00:18:54.693 "num_base_bdevs_operational": 4, 00:18:54.693 "base_bdevs_list": [ 00:18:54.693 { 00:18:54.693 "name": "BaseBdev1", 00:18:54.693 "uuid": "7b21d115-9490-4da6-8bb5-8c462a21d8e4", 00:18:54.693 "is_configured": true, 00:18:54.693 "data_offset": 0, 00:18:54.693 "data_size": 65536 00:18:54.693 }, 00:18:54.693 { 00:18:54.693 "name": "BaseBdev2", 00:18:54.693 "uuid": "559af06a-31d0-42c0-a464-a70d2f755f6d", 00:18:54.693 "is_configured": true, 00:18:54.693 "data_offset": 0, 00:18:54.693 "data_size": 65536 00:18:54.693 }, 00:18:54.693 { 00:18:54.693 "name": "BaseBdev3", 00:18:54.693 "uuid": "422303ba-5881-436e-9bd1-b6d5e4c0eaa5", 00:18:54.693 "is_configured": true, 00:18:54.693 "data_offset": 0, 00:18:54.693 "data_size": 65536 00:18:54.693 }, 00:18:54.693 { 00:18:54.693 "name": "BaseBdev4", 00:18:54.693 "uuid": "a66d490c-eaa5-4940-b268-9948b7992e40", 00:18:54.693 "is_configured": true, 00:18:54.693 "data_offset": 0, 00:18:54.693 "data_size": 65536 00:18:54.693 } 00:18:54.693 ] 00:18:54.693 }' 00:18:54.693 16:57:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:54.693 16:57:43 -- common/autotest_common.sh@10 -- # set +x 00:18:55.259 16:57:43 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:18:55.517 [2024-11-05 16:57:44.173155] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:55.517 [2024-11-05 16:57:44.173193] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.517 [2024-11-05 16:57:44.173301] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.517 16:57:44 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:55.517 16:57:44 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:55.517 16:57:44 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:55.517 16:57:44 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:55.517 16:57:44 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:55.517 16:57:44 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:55.517 16:57:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:55.517 16:57:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:55.517 16:57:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:55.517 16:57:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:55.517 16:57:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:55.517 16:57:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:55.517 16:57:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:55.517 16:57:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:55.517 16:57:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:55.517 16:57:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.518 16:57:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.775 16:57:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:55.775 "name": "Existed_Raid", 00:18:55.775 "uuid": "21d71689-7c2d-45a6-9c55-40fc9a9e8322", 00:18:55.775 "strip_size_kb": 64, 00:18:55.775 "state": "offline", 00:18:55.775 "raid_level": "concat", 00:18:55.776 "superblock": false, 00:18:55.776 "num_base_bdevs": 4, 00:18:55.776 "num_base_bdevs_discovered": 3, 00:18:55.776 "num_base_bdevs_operational": 3, 00:18:55.776 "base_bdevs_list": [ 00:18:55.776 { 00:18:55.776 "name": null, 00:18:55.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.776 "is_configured": false, 00:18:55.776 "data_offset": 0, 00:18:55.776 "data_size": 65536 00:18:55.776 }, 00:18:55.776 { 00:18:55.776 "name": "BaseBdev2", 00:18:55.776 "uuid": "559af06a-31d0-42c0-a464-a70d2f755f6d", 00:18:55.776 "is_configured": true, 00:18:55.776 "data_offset": 0, 00:18:55.776 "data_size": 65536 00:18:55.776 }, 00:18:55.776 { 00:18:55.776 "name": "BaseBdev3", 00:18:55.776 "uuid": "422303ba-5881-436e-9bd1-b6d5e4c0eaa5", 00:18:55.776 "is_configured": true, 00:18:55.776 "data_offset": 0, 00:18:55.776 "data_size": 65536 00:18:55.776 }, 00:18:55.776 { 00:18:55.776 "name": "BaseBdev4", 00:18:55.776 "uuid": "a66d490c-eaa5-4940-b268-9948b7992e40", 00:18:55.776 "is_configured": true, 00:18:55.776 "data_offset": 0, 00:18:55.776 "data_size": 65536 00:18:55.776 } 00:18:55.776 ] 00:18:55.776 }' 00:18:55.776 16:57:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:55.776 16:57:44 -- common/autotest_common.sh@10 -- # set +x 00:18:56.341 16:57:45 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:56.341 16:57:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:56.341 16:57:45 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:18:56.341 16:57:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:56.598 16:57:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:56.598 16:57:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:56.598 16:57:45 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:56.855 [2024-11-05 16:57:45.605425] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:56.855 16:57:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:56.855 16:57:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:56.855 16:57:45 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.855 16:57:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:57.112 16:57:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:57.112 16:57:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:57.112 16:57:45 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:57.370 [2024-11-05 16:57:46.069571] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:57.370 16:57:46 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:57.370 16:57:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:57.370 16:57:46 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.370 16:57:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:57.627 16:57:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:57.627 16:57:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:57.627 16:57:46 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:57.885 [2024-11-05 16:57:46.573442] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:57.885 [2024-11-05 16:57:46.573534] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:18:57.885 16:57:46 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:57.885 16:57:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:57.885 16:57:46 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.885 16:57:46 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:58.142 16:57:46 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:58.142 16:57:46 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:58.142 16:57:46 -- bdev/bdev_raid.sh@287 -- # killprocess 119662 00:18:58.142 16:57:46 -- common/autotest_common.sh@936 -- # '[' -z 119662 ']' 00:18:58.142 16:57:46 -- common/autotest_common.sh@940 -- # kill -0 119662 00:18:58.142 16:57:46 -- common/autotest_common.sh@941 -- # uname 00:18:58.142 16:57:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:58.142 16:57:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119662 00:18:58.143 16:57:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:58.143 16:57:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:58.143 16:57:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119662' 00:18:58.143 killing process with pid 119662 00:18:58.143 16:57:46 -- common/autotest_common.sh@955 
-- # kill 119662 00:18:58.143 16:57:46 -- common/autotest_common.sh@960 -- # wait 119662 00:18:58.143 [2024-11-05 16:57:46.864488] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:58.143 [2024-11-05 16:57:46.864598] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:59.075 ************************************ 00:18:59.075 END TEST raid_state_function_test 00:18:59.075 ************************************ 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:59.075 00:18:59.075 real 0m13.657s 00:18:59.075 user 0m24.431s 00:18:59.075 sys 0m1.598s 00:18:59.075 16:57:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:59.075 16:57:47 -- common/autotest_common.sh@10 -- # set +x 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:18:59.075 16:57:47 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:59.075 16:57:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:59.075 16:57:47 -- common/autotest_common.sh@10 -- # set +x 00:18:59.075 ************************************ 00:18:59.075 START TEST raid_state_function_test_sb 00:18:59.075 ************************************ 00:18:59.075 16:57:47 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 4 true 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@226 -- # raid_pid=120087 
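For anyone decoding the @206 xtrace records above: the per-iteration echo followed by a single array assignment is the shape bash trace output gives a command substitution that builds the base-bdev name list. A paraphrased sketch of what bdev_raid.sh is doing at that point inside the test function (not the verbatim source):

    local base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do
        echo "BaseBdev$i"
    done))
    # with num_base_bdevs=4 this expands to
    # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')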
00:18:59.075 Process raid pid: 120087 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120087' 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120087 /var/tmp/spdk-raid.sock 00:18:59.075 16:57:47 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:59.075 16:57:47 -- common/autotest_common.sh@829 -- # '[' -z 120087 ']' 00:18:59.075 16:57:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:59.075 16:57:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:59.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:59.075 16:57:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:59.075 16:57:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:59.075 16:57:47 -- common/autotest_common.sh@10 -- # set +x 00:18:59.076 [2024-11-05 16:57:47.940060] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:59.076 [2024-11-05 16:57:47.940277] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.333 [2024-11-05 16:57:48.111748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.591 [2024-11-05 16:57:48.325505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.849 [2024-11-05 16:57:48.500573] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:00.107 16:57:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:00.107 16:57:48 -- common/autotest_common.sh@862 -- # return 0 00:19:00.107 16:57:48 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:00.365 [2024-11-05 16:57:49.068180] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:00.365 [2024-11-05 16:57:49.068268] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:00.365 [2024-11-05 16:57:49.068295] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:00.365 [2024-11-05 16:57:49.068316] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:00.365 [2024-11-05 16:57:49.068323] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:00.365 [2024-11-05 16:57:49.068358] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:00.365 [2024-11-05 16:57:49.068366] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:00.365 [2024-11-05 16:57:49.068387] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:00.365 16:57:49 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:00.365 16:57:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:00.365 16:57:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:00.365 16:57:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:00.365 16:57:49 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:00.365 16:57:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:00.365 16:57:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:00.365 16:57:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:00.365 16:57:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:00.365 16:57:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:00.365 16:57:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.365 16:57:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.622 16:57:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:00.622 "name": "Existed_Raid", 00:19:00.623 "uuid": "01a5bdca-c169-4d25-bd64-7d3028728239", 00:19:00.623 "strip_size_kb": 64, 00:19:00.623 "state": "configuring", 00:19:00.623 "raid_level": "concat", 00:19:00.623 "superblock": true, 00:19:00.623 "num_base_bdevs": 4, 00:19:00.623 "num_base_bdevs_discovered": 0, 00:19:00.623 "num_base_bdevs_operational": 4, 00:19:00.623 "base_bdevs_list": [ 00:19:00.623 { 00:19:00.623 "name": "BaseBdev1", 00:19:00.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.623 "is_configured": false, 00:19:00.623 "data_offset": 0, 00:19:00.623 "data_size": 0 00:19:00.623 }, 00:19:00.623 { 00:19:00.623 "name": "BaseBdev2", 00:19:00.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.623 "is_configured": false, 00:19:00.623 "data_offset": 0, 00:19:00.623 "data_size": 0 00:19:00.623 }, 00:19:00.623 { 00:19:00.623 "name": "BaseBdev3", 00:19:00.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.623 "is_configured": false, 00:19:00.623 "data_offset": 0, 00:19:00.623 "data_size": 0 00:19:00.623 }, 00:19:00.623 { 00:19:00.623 "name": "BaseBdev4", 00:19:00.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.623 "is_configured": false, 00:19:00.623 "data_offset": 0, 00:19:00.623 "data_size": 0 00:19:00.623 } 00:19:00.623 ] 00:19:00.623 }' 00:19:00.623 16:57:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:00.623 16:57:49 -- common/autotest_common.sh@10 -- # set +x 00:19:01.200 16:57:49 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:01.474 [2024-11-05 16:57:50.212260] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:01.474 [2024-11-05 16:57:50.212307] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:19:01.474 16:57:50 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:01.731 [2024-11-05 16:57:50.464325] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:01.731 [2024-11-05 16:57:50.464384] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:01.731 [2024-11-05 16:57:50.464411] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:01.731 [2024-11-05 16:57:50.464438] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:01.731 [2024-11-05 16:57:50.464445] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:01.731 [2024-11-05 16:57:50.464478] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:01.731 [2024-11-05 16:57:50.464485] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:01.731 [2024-11-05 16:57:50.464506] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:01.731 16:57:50 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:01.989 [2024-11-05 16:57:50.690625] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:01.989 BaseBdev1 00:19:01.989 16:57:50 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:01.989 16:57:50 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:01.989 16:57:50 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:01.989 16:57:50 -- common/autotest_common.sh@899 -- # local i 00:19:01.989 16:57:50 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:01.989 16:57:50 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:01.989 16:57:50 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:02.247 16:57:50 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:02.247 [ 00:19:02.247 { 00:19:02.247 "name": "BaseBdev1", 00:19:02.247 "aliases": [ 00:19:02.247 "16e5c27b-af25-4d3f-a4d3-628a5ad4266a" 00:19:02.247 ], 00:19:02.247 "product_name": "Malloc disk", 00:19:02.247 "block_size": 512, 00:19:02.247 "num_blocks": 65536, 00:19:02.247 "uuid": "16e5c27b-af25-4d3f-a4d3-628a5ad4266a", 00:19:02.247 "assigned_rate_limits": { 00:19:02.247 "rw_ios_per_sec": 0, 00:19:02.247 "rw_mbytes_per_sec": 0, 00:19:02.247 "r_mbytes_per_sec": 0, 00:19:02.247 "w_mbytes_per_sec": 0 00:19:02.247 }, 00:19:02.247 "claimed": true, 00:19:02.247 "claim_type": "exclusive_write", 00:19:02.247 "zoned": false, 00:19:02.248 "supported_io_types": { 00:19:02.248 "read": true, 00:19:02.248 "write": true, 00:19:02.248 "unmap": true, 00:19:02.248 "write_zeroes": true, 00:19:02.248 "flush": true, 00:19:02.248 "reset": true, 00:19:02.248 "compare": false, 00:19:02.248 "compare_and_write": false, 00:19:02.248 "abort": true, 00:19:02.248 "nvme_admin": false, 00:19:02.248 "nvme_io": false 00:19:02.248 }, 00:19:02.248 "memory_domains": [ 00:19:02.248 { 00:19:02.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.248 "dma_device_type": 2 00:19:02.248 } 00:19:02.248 ], 00:19:02.248 "driver_specific": {} 00:19:02.248 } 00:19:02.248 ] 00:19:02.506 16:57:51 -- common/autotest_common.sh@905 -- # return 0 00:19:02.506 16:57:51 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:02.506 16:57:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:02.506 16:57:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:02.506 16:57:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:02.506 16:57:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:02.506 16:57:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:02.506 16:57:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:02.506 16:57:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:02.506 16:57:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:02.506 16:57:51 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:19:02.506 16:57:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.506 16:57:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.506 16:57:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:02.506 "name": "Existed_Raid", 00:19:02.506 "uuid": "36606787-3cea-46f2-950c-8a7f2a0489f7", 00:19:02.506 "strip_size_kb": 64, 00:19:02.506 "state": "configuring", 00:19:02.506 "raid_level": "concat", 00:19:02.506 "superblock": true, 00:19:02.506 "num_base_bdevs": 4, 00:19:02.506 "num_base_bdevs_discovered": 1, 00:19:02.506 "num_base_bdevs_operational": 4, 00:19:02.506 "base_bdevs_list": [ 00:19:02.506 { 00:19:02.506 "name": "BaseBdev1", 00:19:02.506 "uuid": "16e5c27b-af25-4d3f-a4d3-628a5ad4266a", 00:19:02.506 "is_configured": true, 00:19:02.506 "data_offset": 2048, 00:19:02.506 "data_size": 63488 00:19:02.506 }, 00:19:02.506 { 00:19:02.506 "name": "BaseBdev2", 00:19:02.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.506 "is_configured": false, 00:19:02.506 "data_offset": 0, 00:19:02.506 "data_size": 0 00:19:02.506 }, 00:19:02.506 { 00:19:02.506 "name": "BaseBdev3", 00:19:02.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.506 "is_configured": false, 00:19:02.506 "data_offset": 0, 00:19:02.506 "data_size": 0 00:19:02.506 }, 00:19:02.506 { 00:19:02.506 "name": "BaseBdev4", 00:19:02.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.506 "is_configured": false, 00:19:02.506 "data_offset": 0, 00:19:02.506 "data_size": 0 00:19:02.506 } 00:19:02.506 ] 00:19:02.506 }' 00:19:02.506 16:57:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:02.506 16:57:51 -- common/autotest_common.sh@10 -- # set +x 00:19:03.072 16:57:51 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:03.330 [2024-11-05 16:57:52.186918] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:03.330 [2024-11-05 16:57:52.186982] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:03.330 16:57:52 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:03.330 16:57:52 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:03.895 16:57:52 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:03.895 BaseBdev1 00:19:03.895 16:57:52 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:03.895 16:57:52 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:03.895 16:57:52 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:03.895 16:57:52 -- common/autotest_common.sh@899 -- # local i 00:19:03.895 16:57:52 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:03.895 16:57:52 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:03.895 16:57:52 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:04.153 16:57:52 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:04.411 [ 00:19:04.411 { 00:19:04.411 "name": "BaseBdev1", 00:19:04.411 "aliases": [ 00:19:04.411 "765a90ce-d9b4-4daa-a817-bc554f61d2af" 00:19:04.411 ], 
00:19:04.411 "product_name": "Malloc disk", 00:19:04.411 "block_size": 512, 00:19:04.411 "num_blocks": 65536, 00:19:04.411 "uuid": "765a90ce-d9b4-4daa-a817-bc554f61d2af", 00:19:04.411 "assigned_rate_limits": { 00:19:04.411 "rw_ios_per_sec": 0, 00:19:04.411 "rw_mbytes_per_sec": 0, 00:19:04.411 "r_mbytes_per_sec": 0, 00:19:04.411 "w_mbytes_per_sec": 0 00:19:04.411 }, 00:19:04.411 "claimed": false, 00:19:04.411 "zoned": false, 00:19:04.411 "supported_io_types": { 00:19:04.411 "read": true, 00:19:04.411 "write": true, 00:19:04.411 "unmap": true, 00:19:04.411 "write_zeroes": true, 00:19:04.411 "flush": true, 00:19:04.411 "reset": true, 00:19:04.411 "compare": false, 00:19:04.411 "compare_and_write": false, 00:19:04.411 "abort": true, 00:19:04.411 "nvme_admin": false, 00:19:04.411 "nvme_io": false 00:19:04.411 }, 00:19:04.411 "memory_domains": [ 00:19:04.411 { 00:19:04.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.411 "dma_device_type": 2 00:19:04.411 } 00:19:04.411 ], 00:19:04.411 "driver_specific": {} 00:19:04.411 } 00:19:04.411 ] 00:19:04.411 16:57:53 -- common/autotest_common.sh@905 -- # return 0 00:19:04.411 16:57:53 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:04.670 [2024-11-05 16:57:53.321490] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:04.670 [2024-11-05 16:57:53.323463] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:04.670 [2024-11-05 16:57:53.323554] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:04.670 [2024-11-05 16:57:53.323581] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:04.670 [2024-11-05 16:57:53.323605] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:04.670 [2024-11-05 16:57:53.323613] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:04.670 [2024-11-05 16:57:53.323629] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:04.670 16:57:53 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:04.670 16:57:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:04.670 16:57:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:04.670 16:57:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:04.670 16:57:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:04.670 16:57:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:04.670 16:57:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:04.670 16:57:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:04.670 16:57:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:04.670 16:57:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:04.670 16:57:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:04.670 16:57:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:04.670 16:57:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.670 16:57:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.928 16:57:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:04.928 "name": "Existed_Raid", 
00:19:04.928 "uuid": "08427ccc-71b9-4c38-99d7-0ecbd0ef120c", 00:19:04.928 "strip_size_kb": 64, 00:19:04.928 "state": "configuring", 00:19:04.928 "raid_level": "concat", 00:19:04.928 "superblock": true, 00:19:04.928 "num_base_bdevs": 4, 00:19:04.928 "num_base_bdevs_discovered": 1, 00:19:04.928 "num_base_bdevs_operational": 4, 00:19:04.928 "base_bdevs_list": [ 00:19:04.928 { 00:19:04.928 "name": "BaseBdev1", 00:19:04.928 "uuid": "765a90ce-d9b4-4daa-a817-bc554f61d2af", 00:19:04.928 "is_configured": true, 00:19:04.928 "data_offset": 2048, 00:19:04.928 "data_size": 63488 00:19:04.928 }, 00:19:04.928 { 00:19:04.928 "name": "BaseBdev2", 00:19:04.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.928 "is_configured": false, 00:19:04.928 "data_offset": 0, 00:19:04.928 "data_size": 0 00:19:04.928 }, 00:19:04.928 { 00:19:04.928 "name": "BaseBdev3", 00:19:04.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.928 "is_configured": false, 00:19:04.928 "data_offset": 0, 00:19:04.928 "data_size": 0 00:19:04.928 }, 00:19:04.928 { 00:19:04.928 "name": "BaseBdev4", 00:19:04.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.928 "is_configured": false, 00:19:04.928 "data_offset": 0, 00:19:04.928 "data_size": 0 00:19:04.928 } 00:19:04.928 ] 00:19:04.928 }' 00:19:04.928 16:57:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:04.928 16:57:53 -- common/autotest_common.sh@10 -- # set +x 00:19:05.494 16:57:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:05.752 [2024-11-05 16:57:54.429834] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:05.752 BaseBdev2 00:19:05.752 16:57:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:05.752 16:57:54 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:05.752 16:57:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:05.752 16:57:54 -- common/autotest_common.sh@899 -- # local i 00:19:05.752 16:57:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:05.752 16:57:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:05.752 16:57:54 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:06.010 16:57:54 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:06.010 [ 00:19:06.010 { 00:19:06.010 "name": "BaseBdev2", 00:19:06.010 "aliases": [ 00:19:06.010 "69585c7a-cbe3-4d27-894c-7de3bc315940" 00:19:06.010 ], 00:19:06.010 "product_name": "Malloc disk", 00:19:06.010 "block_size": 512, 00:19:06.010 "num_blocks": 65536, 00:19:06.010 "uuid": "69585c7a-cbe3-4d27-894c-7de3bc315940", 00:19:06.010 "assigned_rate_limits": { 00:19:06.010 "rw_ios_per_sec": 0, 00:19:06.010 "rw_mbytes_per_sec": 0, 00:19:06.010 "r_mbytes_per_sec": 0, 00:19:06.010 "w_mbytes_per_sec": 0 00:19:06.010 }, 00:19:06.010 "claimed": true, 00:19:06.010 "claim_type": "exclusive_write", 00:19:06.010 "zoned": false, 00:19:06.010 "supported_io_types": { 00:19:06.010 "read": true, 00:19:06.010 "write": true, 00:19:06.010 "unmap": true, 00:19:06.010 "write_zeroes": true, 00:19:06.010 "flush": true, 00:19:06.010 "reset": true, 00:19:06.010 "compare": false, 00:19:06.010 "compare_and_write": false, 00:19:06.010 "abort": true, 00:19:06.010 "nvme_admin": false, 00:19:06.010 "nvme_io": false 00:19:06.010 }, 00:19:06.010 
"memory_domains": [ 00:19:06.010 { 00:19:06.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.010 "dma_device_type": 2 00:19:06.010 } 00:19:06.010 ], 00:19:06.010 "driver_specific": {} 00:19:06.010 } 00:19:06.010 ] 00:19:06.010 16:57:54 -- common/autotest_common.sh@905 -- # return 0 00:19:06.010 16:57:54 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:06.010 16:57:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:06.010 16:57:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:06.010 16:57:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:06.010 16:57:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:06.010 16:57:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:06.010 16:57:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:06.010 16:57:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:06.010 16:57:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:06.010 16:57:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:06.010 16:57:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:06.010 16:57:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:06.010 16:57:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.010 16:57:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.269 16:57:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:06.269 "name": "Existed_Raid", 00:19:06.269 "uuid": "08427ccc-71b9-4c38-99d7-0ecbd0ef120c", 00:19:06.269 "strip_size_kb": 64, 00:19:06.269 "state": "configuring", 00:19:06.269 "raid_level": "concat", 00:19:06.269 "superblock": true, 00:19:06.269 "num_base_bdevs": 4, 00:19:06.269 "num_base_bdevs_discovered": 2, 00:19:06.269 "num_base_bdevs_operational": 4, 00:19:06.269 "base_bdevs_list": [ 00:19:06.269 { 00:19:06.269 "name": "BaseBdev1", 00:19:06.269 "uuid": "765a90ce-d9b4-4daa-a817-bc554f61d2af", 00:19:06.269 "is_configured": true, 00:19:06.269 "data_offset": 2048, 00:19:06.269 "data_size": 63488 00:19:06.269 }, 00:19:06.269 { 00:19:06.269 "name": "BaseBdev2", 00:19:06.269 "uuid": "69585c7a-cbe3-4d27-894c-7de3bc315940", 00:19:06.269 "is_configured": true, 00:19:06.269 "data_offset": 2048, 00:19:06.269 "data_size": 63488 00:19:06.269 }, 00:19:06.269 { 00:19:06.269 "name": "BaseBdev3", 00:19:06.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.269 "is_configured": false, 00:19:06.269 "data_offset": 0, 00:19:06.269 "data_size": 0 00:19:06.269 }, 00:19:06.269 { 00:19:06.269 "name": "BaseBdev4", 00:19:06.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.269 "is_configured": false, 00:19:06.269 "data_offset": 0, 00:19:06.269 "data_size": 0 00:19:06.269 } 00:19:06.269 ] 00:19:06.269 }' 00:19:06.269 16:57:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:06.269 16:57:55 -- common/autotest_common.sh@10 -- # set +x 00:19:06.834 16:57:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:07.092 [2024-11-05 16:57:55.914025] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:07.092 BaseBdev3 00:19:07.092 16:57:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:07.092 16:57:55 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:07.092 16:57:55 -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:19:07.092 16:57:55 -- common/autotest_common.sh@899 -- # local i 00:19:07.092 16:57:55 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:07.092 16:57:55 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:07.092 16:57:55 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:07.351 16:57:56 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:07.610 [ 00:19:07.610 { 00:19:07.610 "name": "BaseBdev3", 00:19:07.610 "aliases": [ 00:19:07.610 "caa41bcd-f156-4464-9859-63039e518403" 00:19:07.610 ], 00:19:07.610 "product_name": "Malloc disk", 00:19:07.610 "block_size": 512, 00:19:07.610 "num_blocks": 65536, 00:19:07.610 "uuid": "caa41bcd-f156-4464-9859-63039e518403", 00:19:07.610 "assigned_rate_limits": { 00:19:07.610 "rw_ios_per_sec": 0, 00:19:07.610 "rw_mbytes_per_sec": 0, 00:19:07.610 "r_mbytes_per_sec": 0, 00:19:07.610 "w_mbytes_per_sec": 0 00:19:07.610 }, 00:19:07.610 "claimed": true, 00:19:07.610 "claim_type": "exclusive_write", 00:19:07.610 "zoned": false, 00:19:07.610 "supported_io_types": { 00:19:07.610 "read": true, 00:19:07.610 "write": true, 00:19:07.610 "unmap": true, 00:19:07.610 "write_zeroes": true, 00:19:07.610 "flush": true, 00:19:07.610 "reset": true, 00:19:07.610 "compare": false, 00:19:07.610 "compare_and_write": false, 00:19:07.610 "abort": true, 00:19:07.610 "nvme_admin": false, 00:19:07.610 "nvme_io": false 00:19:07.610 }, 00:19:07.610 "memory_domains": [ 00:19:07.610 { 00:19:07.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.610 "dma_device_type": 2 00:19:07.610 } 00:19:07.610 ], 00:19:07.610 "driver_specific": {} 00:19:07.610 } 00:19:07.610 ] 00:19:07.610 16:57:56 -- common/autotest_common.sh@905 -- # return 0 00:19:07.610 16:57:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:07.610 16:57:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:07.610 16:57:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:07.610 16:57:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:07.610 16:57:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:07.610 16:57:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:07.610 16:57:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:07.610 16:57:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:07.610 16:57:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:07.610 16:57:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:07.610 16:57:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:07.610 16:57:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:07.610 16:57:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.610 16:57:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.899 16:57:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:07.899 "name": "Existed_Raid", 00:19:07.899 "uuid": "08427ccc-71b9-4c38-99d7-0ecbd0ef120c", 00:19:07.899 "strip_size_kb": 64, 00:19:07.899 "state": "configuring", 00:19:07.899 "raid_level": "concat", 00:19:07.899 "superblock": true, 00:19:07.899 "num_base_bdevs": 4, 00:19:07.899 "num_base_bdevs_discovered": 3, 00:19:07.899 "num_base_bdevs_operational": 4, 00:19:07.899 "base_bdevs_list": [ 00:19:07.899 { 
00:19:07.899 "name": "BaseBdev1", 00:19:07.899 "uuid": "765a90ce-d9b4-4daa-a817-bc554f61d2af", 00:19:07.899 "is_configured": true, 00:19:07.899 "data_offset": 2048, 00:19:07.899 "data_size": 63488 00:19:07.899 }, 00:19:07.899 { 00:19:07.899 "name": "BaseBdev2", 00:19:07.899 "uuid": "69585c7a-cbe3-4d27-894c-7de3bc315940", 00:19:07.899 "is_configured": true, 00:19:07.899 "data_offset": 2048, 00:19:07.899 "data_size": 63488 00:19:07.899 }, 00:19:07.899 { 00:19:07.899 "name": "BaseBdev3", 00:19:07.899 "uuid": "caa41bcd-f156-4464-9859-63039e518403", 00:19:07.899 "is_configured": true, 00:19:07.899 "data_offset": 2048, 00:19:07.899 "data_size": 63488 00:19:07.899 }, 00:19:07.899 { 00:19:07.899 "name": "BaseBdev4", 00:19:07.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.899 "is_configured": false, 00:19:07.899 "data_offset": 0, 00:19:07.899 "data_size": 0 00:19:07.899 } 00:19:07.899 ] 00:19:07.899 }' 00:19:07.899 16:57:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:07.899 16:57:56 -- common/autotest_common.sh@10 -- # set +x 00:19:08.466 16:57:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:08.725 [2024-11-05 16:57:57.442779] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:08.725 [2024-11-05 16:57:57.443094] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:19:08.725 [2024-11-05 16:57:57.443110] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:08.725 [2024-11-05 16:57:57.443254] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:19:08.725 [2024-11-05 16:57:57.443636] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:19:08.725 [2024-11-05 16:57:57.443662] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:19:08.725 [2024-11-05 16:57:57.443835] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.725 BaseBdev4 00:19:08.725 16:57:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:08.725 16:57:57 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:19:08.725 16:57:57 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:08.725 16:57:57 -- common/autotest_common.sh@899 -- # local i 00:19:08.725 16:57:57 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:08.725 16:57:57 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:08.725 16:57:57 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:08.984 16:57:57 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:08.984 [ 00:19:08.984 { 00:19:08.984 "name": "BaseBdev4", 00:19:08.984 "aliases": [ 00:19:08.984 "387635e9-26f9-460e-b2b9-b294ef490ec1" 00:19:08.984 ], 00:19:08.984 "product_name": "Malloc disk", 00:19:08.984 "block_size": 512, 00:19:08.984 "num_blocks": 65536, 00:19:08.984 "uuid": "387635e9-26f9-460e-b2b9-b294ef490ec1", 00:19:08.984 "assigned_rate_limits": { 00:19:08.984 "rw_ios_per_sec": 0, 00:19:08.984 "rw_mbytes_per_sec": 0, 00:19:08.984 "r_mbytes_per_sec": 0, 00:19:08.984 "w_mbytes_per_sec": 0 00:19:08.984 }, 00:19:08.984 "claimed": true, 00:19:08.984 "claim_type": "exclusive_write", 00:19:08.984 "zoned": false, 
00:19:08.984 "supported_io_types": { 00:19:08.984 "read": true, 00:19:08.984 "write": true, 00:19:08.984 "unmap": true, 00:19:08.984 "write_zeroes": true, 00:19:08.984 "flush": true, 00:19:08.984 "reset": true, 00:19:08.984 "compare": false, 00:19:08.984 "compare_and_write": false, 00:19:08.984 "abort": true, 00:19:08.984 "nvme_admin": false, 00:19:08.984 "nvme_io": false 00:19:08.984 }, 00:19:08.984 "memory_domains": [ 00:19:08.984 { 00:19:08.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.984 "dma_device_type": 2 00:19:08.984 } 00:19:08.984 ], 00:19:08.984 "driver_specific": {} 00:19:08.984 } 00:19:08.984 ] 00:19:08.984 16:57:57 -- common/autotest_common.sh@905 -- # return 0 00:19:08.984 16:57:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:08.984 16:57:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:08.984 16:57:57 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:08.984 16:57:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:08.984 16:57:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:08.984 16:57:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:08.984 16:57:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:08.984 16:57:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:08.984 16:57:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:08.984 16:57:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:08.984 16:57:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:08.984 16:57:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:08.984 16:57:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.984 16:57:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.243 16:57:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:09.243 "name": "Existed_Raid", 00:19:09.243 "uuid": "08427ccc-71b9-4c38-99d7-0ecbd0ef120c", 00:19:09.243 "strip_size_kb": 64, 00:19:09.243 "state": "online", 00:19:09.243 "raid_level": "concat", 00:19:09.243 "superblock": true, 00:19:09.243 "num_base_bdevs": 4, 00:19:09.243 "num_base_bdevs_discovered": 4, 00:19:09.243 "num_base_bdevs_operational": 4, 00:19:09.243 "base_bdevs_list": [ 00:19:09.243 { 00:19:09.243 "name": "BaseBdev1", 00:19:09.243 "uuid": "765a90ce-d9b4-4daa-a817-bc554f61d2af", 00:19:09.243 "is_configured": true, 00:19:09.243 "data_offset": 2048, 00:19:09.243 "data_size": 63488 00:19:09.243 }, 00:19:09.243 { 00:19:09.243 "name": "BaseBdev2", 00:19:09.243 "uuid": "69585c7a-cbe3-4d27-894c-7de3bc315940", 00:19:09.243 "is_configured": true, 00:19:09.243 "data_offset": 2048, 00:19:09.243 "data_size": 63488 00:19:09.243 }, 00:19:09.243 { 00:19:09.243 "name": "BaseBdev3", 00:19:09.243 "uuid": "caa41bcd-f156-4464-9859-63039e518403", 00:19:09.243 "is_configured": true, 00:19:09.243 "data_offset": 2048, 00:19:09.243 "data_size": 63488 00:19:09.243 }, 00:19:09.243 { 00:19:09.243 "name": "BaseBdev4", 00:19:09.243 "uuid": "387635e9-26f9-460e-b2b9-b294ef490ec1", 00:19:09.243 "is_configured": true, 00:19:09.243 "data_offset": 2048, 00:19:09.243 "data_size": 63488 00:19:09.243 } 00:19:09.243 ] 00:19:09.243 }' 00:19:09.243 16:57:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:09.243 16:57:58 -- common/autotest_common.sh@10 -- # set +x 00:19:09.810 16:57:58 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:19:10.070 [2024-11-05 16:57:58.919158] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:10.070 [2024-11-05 16:57:58.919192] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:10.070 [2024-11-05 16:57:58.919251] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:10.328 16:57:58 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:10.328 16:57:58 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:19:10.328 16:57:58 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:10.328 16:57:58 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:10.328 16:57:58 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:10.328 16:57:58 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:10.328 16:57:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:10.328 16:57:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:10.328 16:57:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:10.328 16:57:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:10.328 16:57:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:10.328 16:57:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:10.328 16:57:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:10.328 16:57:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:10.328 16:57:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:10.328 16:57:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.328 16:57:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.328 16:57:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:10.328 "name": "Existed_Raid", 00:19:10.328 "uuid": "08427ccc-71b9-4c38-99d7-0ecbd0ef120c", 00:19:10.328 "strip_size_kb": 64, 00:19:10.328 "state": "offline", 00:19:10.328 "raid_level": "concat", 00:19:10.328 "superblock": true, 00:19:10.328 "num_base_bdevs": 4, 00:19:10.328 "num_base_bdevs_discovered": 3, 00:19:10.328 "num_base_bdevs_operational": 3, 00:19:10.328 "base_bdevs_list": [ 00:19:10.328 { 00:19:10.328 "name": null, 00:19:10.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.328 "is_configured": false, 00:19:10.328 "data_offset": 2048, 00:19:10.328 "data_size": 63488 00:19:10.328 }, 00:19:10.328 { 00:19:10.328 "name": "BaseBdev2", 00:19:10.328 "uuid": "69585c7a-cbe3-4d27-894c-7de3bc315940", 00:19:10.328 "is_configured": true, 00:19:10.328 "data_offset": 2048, 00:19:10.328 "data_size": 63488 00:19:10.328 }, 00:19:10.328 { 00:19:10.328 "name": "BaseBdev3", 00:19:10.328 "uuid": "caa41bcd-f156-4464-9859-63039e518403", 00:19:10.328 "is_configured": true, 00:19:10.328 "data_offset": 2048, 00:19:10.328 "data_size": 63488 00:19:10.328 }, 00:19:10.328 { 00:19:10.328 "name": "BaseBdev4", 00:19:10.328 "uuid": "387635e9-26f9-460e-b2b9-b294ef490ec1", 00:19:10.328 "is_configured": true, 00:19:10.329 "data_offset": 2048, 00:19:10.329 "data_size": 63488 00:19:10.329 } 00:19:10.329 ] 00:19:10.329 }' 00:19:10.329 16:57:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:10.329 16:57:59 -- common/autotest_common.sh@10 -- # set +x 00:19:11.264 16:57:59 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:11.264 16:57:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:11.264 16:57:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.264 16:57:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:11.264 16:58:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:11.264 16:58:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:11.264 16:58:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:11.522 [2024-11-05 16:58:00.194149] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:11.522 16:58:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:11.522 16:58:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:11.522 16:58:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:11.522 16:58:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.780 16:58:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:11.780 16:58:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:11.780 16:58:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:12.039 [2024-11-05 16:58:00.715301] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:12.039 16:58:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:12.039 16:58:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:12.039 16:58:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.039 16:58:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:12.297 16:58:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:12.297 16:58:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:12.297 16:58:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:12.297 [2024-11-05 16:58:01.184503] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:12.297 [2024-11-05 16:58:01.184578] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:19:12.555 16:58:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:12.555 16:58:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:12.555 16:58:01 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.555 16:58:01 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:12.813 16:58:01 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:12.813 16:58:01 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:12.813 16:58:01 -- bdev/bdev_raid.sh@287 -- # killprocess 120087 00:19:12.813 16:58:01 -- common/autotest_common.sh@936 -- # '[' -z 120087 ']' 00:19:12.813 16:58:01 -- common/autotest_common.sh@940 -- # kill -0 120087 00:19:12.813 16:58:01 -- common/autotest_common.sh@941 -- # uname 00:19:12.813 16:58:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:12.813 16:58:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120087 00:19:12.813 16:58:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:12.813 16:58:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:12.813 16:58:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120087' 00:19:12.813 killing process with pid 120087 
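The verify_raid_bdev_state checks that recur throughout this log reduce to one RPC plus a jq filter over its JSON output; replayed by hand against the same socket it looks like the following (both commands appear verbatim in the trace, and the comparison step is condensed into a comment):

    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")'
    # the helper then compares .state, .raid_level, .strip_size_kb and the
    # .num_base_bdevs* counters from this JSON against the expected values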
00:19:12.813 16:58:01 -- common/autotest_common.sh@955 -- # kill 120087 00:19:12.813 [2024-11-05 16:58:01.484126] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:12.813 [2024-11-05 16:58:01.484234] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:12.813 16:58:01 -- common/autotest_common.sh@960 -- # wait 120087 00:19:13.779 ************************************ 00:19:13.779 END TEST raid_state_function_test_sb 00:19:13.779 ************************************ 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:13.779 00:19:13.779 real 0m14.538s 00:19:13.779 user 0m26.016s 00:19:13.779 sys 0m1.702s 00:19:13.779 16:58:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:13.779 16:58:02 -- common/autotest_common.sh@10 -- # set +x 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:19:13.779 16:58:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:13.779 16:58:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:13.779 16:58:02 -- common/autotest_common.sh@10 -- # set +x 00:19:13.779 ************************************ 00:19:13.779 START TEST raid_superblock_test 00:19:13.779 ************************************ 00:19:13.779 16:58:02 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 4 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@357 -- # raid_pid=120535 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:13.779 16:58:02 -- bdev/bdev_raid.sh@358 -- # waitforlisten 120535 /var/tmp/spdk-raid.sock 00:19:13.779 16:58:02 -- common/autotest_common.sh@829 -- # '[' -z 120535 ']' 00:19:13.779 16:58:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:13.779 16:58:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:13.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:13.779 16:58:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
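Each sub-test launches its own bdev_svc daemon on a private RPC socket and blocks until it answers; a by-hand equivalent of the startup being traced here (the polling loop is an illustrative stand-in for the real waitforlisten helper in autotest_common.sh, and rpc_get_methods is simply a cheap RPC to probe with):

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # poll until the UNIX domain socket accepts RPCs
    while ! rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done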
00:19:13.779 16:58:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:13.779 16:58:02 -- common/autotest_common.sh@10 -- # set +x 00:19:13.779 [2024-11-05 16:58:02.537897] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:13.779 [2024-11-05 16:58:02.538134] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120535 ] 00:19:14.038 [2024-11-05 16:58:02.705846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.038 [2024-11-05 16:58:02.894200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.296 [2024-11-05 16:58:03.062027] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:14.554 16:58:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:14.554 16:58:03 -- common/autotest_common.sh@862 -- # return 0 00:19:14.554 16:58:03 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:14.554 16:58:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:14.554 16:58:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:14.554 16:58:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:14.554 16:58:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:14.554 16:58:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:14.554 16:58:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:14.554 16:58:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:14.554 16:58:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:14.813 malloc1 00:19:14.813 16:58:03 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:15.071 [2024-11-05 16:58:03.838031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:15.071 [2024-11-05 16:58:03.838155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.071 [2024-11-05 16:58:03.838188] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:19:15.071 [2024-11-05 16:58:03.838236] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.071 [2024-11-05 16:58:03.841022] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.071 [2024-11-05 16:58:03.841092] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:15.071 pt1 00:19:15.071 16:58:03 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:15.071 16:58:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:15.071 16:58:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:15.071 16:58:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:15.071 16:58:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:15.071 16:58:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:15.071 16:58:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:15.071 16:58:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:15.071 16:58:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:15.330 malloc2 00:19:15.330 16:58:04 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:15.588 [2024-11-05 16:58:04.329677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:15.588 [2024-11-05 16:58:04.329782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.588 [2024-11-05 16:58:04.329828] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:15.588 [2024-11-05 16:58:04.329878] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.588 [2024-11-05 16:58:04.332159] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.588 [2024-11-05 16:58:04.332224] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:15.588 pt2 00:19:15.588 16:58:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:15.588 16:58:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:15.588 16:58:04 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:15.588 16:58:04 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:15.588 16:58:04 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:15.588 16:58:04 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:15.588 16:58:04 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:15.588 16:58:04 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:15.588 16:58:04 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:15.847 malloc3 00:19:15.847 16:58:04 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:16.105 [2024-11-05 16:58:04.803978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:16.105 [2024-11-05 16:58:04.804081] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.105 [2024-11-05 16:58:04.804141] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:19:16.105 [2024-11-05 16:58:04.804184] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.105 [2024-11-05 16:58:04.806430] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.105 [2024-11-05 16:58:04.806498] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:16.105 pt3 00:19:16.105 16:58:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:16.105 16:58:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:16.105 16:58:04 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:19:16.105 16:58:04 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:19:16.105 16:58:04 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:16.105 16:58:04 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:16.105 16:58:04 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:16.105 16:58:04 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:16.105 16:58:04 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:16.363 malloc4 00:19:16.363 16:58:05 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:16.363 [2024-11-05 16:58:05.230010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:16.363 [2024-11-05 16:58:05.230122] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.363 [2024-11-05 16:58:05.230160] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:19:16.363 [2024-11-05 16:58:05.230203] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.363 [2024-11-05 16:58:05.232607] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.363 [2024-11-05 16:58:05.232678] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:16.363 pt4 00:19:16.363 16:58:05 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:16.363 16:58:05 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:16.363 16:58:05 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:16.622 [2024-11-05 16:58:05.422132] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:16.622 [2024-11-05 16:58:05.424293] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:16.622 [2024-11-05 16:58:05.424386] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:16.622 [2024-11-05 16:58:05.424501] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:16.622 [2024-11-05 16:58:05.424753] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:19:16.622 [2024-11-05 16:58:05.424777] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:16.622 [2024-11-05 16:58:05.424894] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:19:16.622 [2024-11-05 16:58:05.425273] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:19:16.622 [2024-11-05 16:58:05.425312] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:19:16.622 [2024-11-05 16:58:05.425477] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.622 16:58:05 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:16.622 16:58:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:16.622 16:58:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:16.622 16:58:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:16.622 16:58:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:16.622 16:58:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:16.622 16:58:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:16.622 16:58:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:16.622 16:58:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:16.622 16:58:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:16.622 16:58:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:19:16.622 16:58:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.881 16:58:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:16.881 "name": "raid_bdev1", 00:19:16.881 "uuid": "940bef80-4080-4ee4-8990-4823b5b6c5c8", 00:19:16.881 "strip_size_kb": 64, 00:19:16.881 "state": "online", 00:19:16.881 "raid_level": "concat", 00:19:16.881 "superblock": true, 00:19:16.881 "num_base_bdevs": 4, 00:19:16.881 "num_base_bdevs_discovered": 4, 00:19:16.881 "num_base_bdevs_operational": 4, 00:19:16.881 "base_bdevs_list": [ 00:19:16.881 { 00:19:16.881 "name": "pt1", 00:19:16.881 "uuid": "b9d6b50c-9a69-565d-a924-efb7a60d7cd3", 00:19:16.881 "is_configured": true, 00:19:16.881 "data_offset": 2048, 00:19:16.881 "data_size": 63488 00:19:16.881 }, 00:19:16.881 { 00:19:16.881 "name": "pt2", 00:19:16.881 "uuid": "01b2a8ac-0a07-5162-aa63-f4f9f76617c1", 00:19:16.881 "is_configured": true, 00:19:16.881 "data_offset": 2048, 00:19:16.881 "data_size": 63488 00:19:16.881 }, 00:19:16.881 { 00:19:16.881 "name": "pt3", 00:19:16.881 "uuid": "0b07151a-3020-5b0d-9359-46ea31477a4b", 00:19:16.881 "is_configured": true, 00:19:16.881 "data_offset": 2048, 00:19:16.881 "data_size": 63488 00:19:16.881 }, 00:19:16.881 { 00:19:16.881 "name": "pt4", 00:19:16.881 "uuid": "fdd5fb66-080d-52d8-904a-6081544309b0", 00:19:16.881 "is_configured": true, 00:19:16.881 "data_offset": 2048, 00:19:16.881 "data_size": 63488 00:19:16.881 } 00:19:16.881 ] 00:19:16.881 }' 00:19:16.881 16:58:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:16.881 16:58:05 -- common/autotest_common.sh@10 -- # set +x 00:19:17.447 16:58:06 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:17.448 16:58:06 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:17.706 [2024-11-05 16:58:06.502415] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:17.706 16:58:06 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=940bef80-4080-4ee4-8990-4823b5b6c5c8 00:19:17.706 16:58:06 -- bdev/bdev_raid.sh@380 -- # '[' -z 940bef80-4080-4ee4-8990-4823b5b6c5c8 ']' 00:19:17.706 16:58:06 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:17.964 [2024-11-05 16:58:06.698212] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:17.964 [2024-11-05 16:58:06.698406] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:17.964 [2024-11-05 16:58:06.698582] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:17.964 [2024-11-05 16:58:06.698750] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:17.964 [2024-11-05 16:58:06.698850] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:19:17.964 16:58:06 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.964 16:58:06 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:18.223 16:58:06 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:18.223 16:58:06 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:18.223 16:58:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:18.223 16:58:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
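
Up to here the happy path has run end to end over JSON-RPC: four 32 MiB malloc bdevs with 512-byte blocks are wrapped in passthru bdevs carrying fixed UUIDs, bdev_raid_create assembles them into a concat array with on-disk superblocks (the trailing -s; hence data_offset 2048 and data_size 63488 in the dump, i.e. 1 MiB of each base bdev reserved), the array is verified online with 4/4 base bdevs discovered, its UUID is captured, and it is deleted again. Condensed into a standalone sketch using the same RPCs (paths as in this run):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b malloc$i        # 32 MiB, 512 B blocks
        $rpc bdev_passthru_create -b malloc$i -p pt$i \
             -u 00000000-0000-0000-0000-00000000000$i
    done
    $rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
    $rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # expect: online
    $rpc bdev_raid_delete raid_bdev1

Deleting the array does not scrub the superblocks, which is what the rest of the test leans on: creating a new array directly on malloc1..malloc4 fails with -17 (File exists), while re-registering the passthru bdevs lets the examine path (raid_bdev_examine_load_sb_cb) re-claim them and re-assemble raid_bdev1 without an explicit create.
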
00:19:18.223 16:58:07 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:18.223 16:58:07 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:18.481 16:58:07 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:18.481 16:58:07 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:18.739 16:58:07 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:18.739 16:58:07 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:18.997 16:58:07 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:18.997 16:58:07 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:19.256 16:58:07 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:19.256 16:58:07 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:19.256 16:58:07 -- common/autotest_common.sh@650 -- # local es=0 00:19:19.256 16:58:07 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:19.256 16:58:07 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:19.256 16:58:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:19.256 16:58:07 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:19.256 16:58:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:19.256 16:58:07 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:19.256 16:58:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:19.256 16:58:07 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:19.256 16:58:07 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:19.256 16:58:07 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:19.514 [2024-11-05 16:58:08.178415] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:19.514 [2024-11-05 16:58:08.180517] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:19.514 [2024-11-05 16:58:08.180717] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:19.514 [2024-11-05 16:58:08.180885] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:19.514 [2024-11-05 16:58:08.181049] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:19.514 [2024-11-05 16:58:08.181243] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:19.515 [2024-11-05 16:58:08.181380] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:19.515 
[2024-11-05 16:58:08.181537] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:19:19.515 [2024-11-05 16:58:08.181662] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:19.515 [2024-11-05 16:58:08.181756] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:19:19.515 request: 00:19:19.515 { 00:19:19.515 "name": "raid_bdev1", 00:19:19.515 "raid_level": "concat", 00:19:19.515 "base_bdevs": [ 00:19:19.515 "malloc1", 00:19:19.515 "malloc2", 00:19:19.515 "malloc3", 00:19:19.515 "malloc4" 00:19:19.515 ], 00:19:19.515 "superblock": false, 00:19:19.515 "strip_size_kb": 64, 00:19:19.515 "method": "bdev_raid_create", 00:19:19.515 "req_id": 1 00:19:19.515 } 00:19:19.515 Got JSON-RPC error response 00:19:19.515 response: 00:19:19.515 { 00:19:19.515 "code": -17, 00:19:19.515 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:19.515 } 00:19:19.515 16:58:08 -- common/autotest_common.sh@653 -- # es=1 00:19:19.515 16:58:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:19.515 16:58:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:19.515 16:58:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:19.515 16:58:08 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.515 16:58:08 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:19.773 16:58:08 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:19.773 16:58:08 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:19.773 16:58:08 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:19.773 [2024-11-05 16:58:08.626454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:19.773 [2024-11-05 16:58:08.626723] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.773 [2024-11-05 16:58:08.626943] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:19.773 [2024-11-05 16:58:08.627091] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.773 [2024-11-05 16:58:08.629608] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.773 [2024-11-05 16:58:08.629808] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:19.773 [2024-11-05 16:58:08.630057] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:19.773 [2024-11-05 16:58:08.630227] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:19.773 pt1 00:19:19.773 16:58:08 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:19.773 16:58:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:19.773 16:58:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:19.773 16:58:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:19.773 16:58:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:19.773 16:58:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:19.773 16:58:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:19.773 16:58:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:19.773 16:58:08 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:19:19.773 16:58:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:19.773 16:58:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.773 16:58:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.032 16:58:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:20.032 "name": "raid_bdev1", 00:19:20.032 "uuid": "940bef80-4080-4ee4-8990-4823b5b6c5c8", 00:19:20.032 "strip_size_kb": 64, 00:19:20.032 "state": "configuring", 00:19:20.032 "raid_level": "concat", 00:19:20.032 "superblock": true, 00:19:20.032 "num_base_bdevs": 4, 00:19:20.032 "num_base_bdevs_discovered": 1, 00:19:20.032 "num_base_bdevs_operational": 4, 00:19:20.032 "base_bdevs_list": [ 00:19:20.032 { 00:19:20.032 "name": "pt1", 00:19:20.032 "uuid": "b9d6b50c-9a69-565d-a924-efb7a60d7cd3", 00:19:20.032 "is_configured": true, 00:19:20.032 "data_offset": 2048, 00:19:20.032 "data_size": 63488 00:19:20.032 }, 00:19:20.032 { 00:19:20.032 "name": null, 00:19:20.032 "uuid": "01b2a8ac-0a07-5162-aa63-f4f9f76617c1", 00:19:20.032 "is_configured": false, 00:19:20.032 "data_offset": 2048, 00:19:20.032 "data_size": 63488 00:19:20.032 }, 00:19:20.032 { 00:19:20.032 "name": null, 00:19:20.032 "uuid": "0b07151a-3020-5b0d-9359-46ea31477a4b", 00:19:20.032 "is_configured": false, 00:19:20.032 "data_offset": 2048, 00:19:20.032 "data_size": 63488 00:19:20.032 }, 00:19:20.032 { 00:19:20.032 "name": null, 00:19:20.032 "uuid": "fdd5fb66-080d-52d8-904a-6081544309b0", 00:19:20.032 "is_configured": false, 00:19:20.032 "data_offset": 2048, 00:19:20.032 "data_size": 63488 00:19:20.032 } 00:19:20.032 ] 00:19:20.032 }' 00:19:20.032 16:58:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:20.032 16:58:08 -- common/autotest_common.sh@10 -- # set +x 00:19:20.599 16:58:09 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:19:20.599 16:58:09 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:20.857 [2024-11-05 16:58:09.702680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:20.857 [2024-11-05 16:58:09.702978] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.857 [2024-11-05 16:58:09.703134] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:20.858 [2024-11-05 16:58:09.703259] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.858 [2024-11-05 16:58:09.703791] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.858 [2024-11-05 16:58:09.704017] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:20.858 [2024-11-05 16:58:09.704232] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:20.858 [2024-11-05 16:58:09.704361] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:20.858 pt2 00:19:20.858 16:58:09 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:21.115 [2024-11-05 16:58:09.942724] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:21.115 16:58:09 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:21.115 16:58:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:19:21.115 16:58:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:21.115 16:58:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:21.115 16:58:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:21.115 16:58:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:21.115 16:58:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:21.115 16:58:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:21.115 16:58:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:21.115 16:58:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:21.115 16:58:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.115 16:58:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.384 16:58:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:21.384 "name": "raid_bdev1", 00:19:21.384 "uuid": "940bef80-4080-4ee4-8990-4823b5b6c5c8", 00:19:21.384 "strip_size_kb": 64, 00:19:21.384 "state": "configuring", 00:19:21.384 "raid_level": "concat", 00:19:21.384 "superblock": true, 00:19:21.384 "num_base_bdevs": 4, 00:19:21.384 "num_base_bdevs_discovered": 1, 00:19:21.384 "num_base_bdevs_operational": 4, 00:19:21.384 "base_bdevs_list": [ 00:19:21.384 { 00:19:21.384 "name": "pt1", 00:19:21.384 "uuid": "b9d6b50c-9a69-565d-a924-efb7a60d7cd3", 00:19:21.384 "is_configured": true, 00:19:21.384 "data_offset": 2048, 00:19:21.384 "data_size": 63488 00:19:21.384 }, 00:19:21.384 { 00:19:21.384 "name": null, 00:19:21.384 "uuid": "01b2a8ac-0a07-5162-aa63-f4f9f76617c1", 00:19:21.384 "is_configured": false, 00:19:21.384 "data_offset": 2048, 00:19:21.384 "data_size": 63488 00:19:21.384 }, 00:19:21.384 { 00:19:21.384 "name": null, 00:19:21.384 "uuid": "0b07151a-3020-5b0d-9359-46ea31477a4b", 00:19:21.384 "is_configured": false, 00:19:21.384 "data_offset": 2048, 00:19:21.384 "data_size": 63488 00:19:21.384 }, 00:19:21.384 { 00:19:21.384 "name": null, 00:19:21.384 "uuid": "fdd5fb66-080d-52d8-904a-6081544309b0", 00:19:21.384 "is_configured": false, 00:19:21.384 "data_offset": 2048, 00:19:21.384 "data_size": 63488 00:19:21.384 } 00:19:21.384 ] 00:19:21.384 }' 00:19:21.384 16:58:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:21.384 16:58:10 -- common/autotest_common.sh@10 -- # set +x 00:19:21.980 16:58:10 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:21.980 16:58:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:21.980 16:58:10 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:22.239 [2024-11-05 16:58:11.062928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:22.239 [2024-11-05 16:58:11.063202] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.239 [2024-11-05 16:58:11.063367] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:22.239 [2024-11-05 16:58:11.063501] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.239 [2024-11-05 16:58:11.064102] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.239 [2024-11-05 16:58:11.064317] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:22.239 [2024-11-05 16:58:11.064522] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:19:22.239 [2024-11-05 16:58:11.064649] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:22.239 pt2 00:19:22.239 16:58:11 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:22.239 16:58:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:22.239 16:58:11 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:22.497 [2024-11-05 16:58:11.250910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:22.497 [2024-11-05 16:58:11.251138] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.497 [2024-11-05 16:58:11.251203] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:22.497 [2024-11-05 16:58:11.251454] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.497 [2024-11-05 16:58:11.251910] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.497 [2024-11-05 16:58:11.252099] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:22.497 [2024-11-05 16:58:11.252307] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:22.497 [2024-11-05 16:58:11.252424] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:22.497 pt3 00:19:22.497 16:58:11 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:22.497 16:58:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:22.497 16:58:11 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:22.757 [2024-11-05 16:58:11.494967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:22.757 [2024-11-05 16:58:11.495213] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.757 [2024-11-05 16:58:11.495289] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:22.757 [2024-11-05 16:58:11.495517] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.757 [2024-11-05 16:58:11.495995] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.757 [2024-11-05 16:58:11.496193] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:22.757 [2024-11-05 16:58:11.496414] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:22.757 [2024-11-05 16:58:11.496548] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:22.757 [2024-11-05 16:58:11.496727] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:19:22.757 [2024-11-05 16:58:11.496845] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:22.757 [2024-11-05 16:58:11.496979] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:22.757 [2024-11-05 16:58:11.497423] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:19:22.757 [2024-11-05 16:58:11.497553] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:19:22.757 [2024-11-05 16:58:11.497773] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:19:22.757 pt4 00:19:22.757 16:58:11 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:22.757 16:58:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:22.757 16:58:11 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:22.757 16:58:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:22.757 16:58:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:22.757 16:58:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:22.757 16:58:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:22.757 16:58:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:22.757 16:58:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:22.757 16:58:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:22.757 16:58:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:22.757 16:58:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:22.757 16:58:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.757 16:58:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.016 16:58:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:23.016 "name": "raid_bdev1", 00:19:23.016 "uuid": "940bef80-4080-4ee4-8990-4823b5b6c5c8", 00:19:23.016 "strip_size_kb": 64, 00:19:23.016 "state": "online", 00:19:23.016 "raid_level": "concat", 00:19:23.016 "superblock": true, 00:19:23.016 "num_base_bdevs": 4, 00:19:23.016 "num_base_bdevs_discovered": 4, 00:19:23.016 "num_base_bdevs_operational": 4, 00:19:23.016 "base_bdevs_list": [ 00:19:23.016 { 00:19:23.016 "name": "pt1", 00:19:23.016 "uuid": "b9d6b50c-9a69-565d-a924-efb7a60d7cd3", 00:19:23.016 "is_configured": true, 00:19:23.016 "data_offset": 2048, 00:19:23.016 "data_size": 63488 00:19:23.016 }, 00:19:23.016 { 00:19:23.016 "name": "pt2", 00:19:23.016 "uuid": "01b2a8ac-0a07-5162-aa63-f4f9f76617c1", 00:19:23.016 "is_configured": true, 00:19:23.016 "data_offset": 2048, 00:19:23.016 "data_size": 63488 00:19:23.016 }, 00:19:23.016 { 00:19:23.016 "name": "pt3", 00:19:23.016 "uuid": "0b07151a-3020-5b0d-9359-46ea31477a4b", 00:19:23.016 "is_configured": true, 00:19:23.016 "data_offset": 2048, 00:19:23.016 "data_size": 63488 00:19:23.016 }, 00:19:23.016 { 00:19:23.016 "name": "pt4", 00:19:23.016 "uuid": "fdd5fb66-080d-52d8-904a-6081544309b0", 00:19:23.016 "is_configured": true, 00:19:23.016 "data_offset": 2048, 00:19:23.016 "data_size": 63488 00:19:23.016 } 00:19:23.016 ] 00:19:23.016 }' 00:19:23.016 16:58:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:23.016 16:58:11 -- common/autotest_common.sh@10 -- # set +x 00:19:23.583 16:58:12 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:23.583 16:58:12 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:23.842 [2024-11-05 16:58:12.659407] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:23.842 16:58:12 -- bdev/bdev_raid.sh@430 -- # '[' 940bef80-4080-4ee4-8990-4823b5b6c5c8 '!=' 940bef80-4080-4ee4-8990-4823b5b6c5c8 ']' 00:19:23.842 16:58:12 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:19:23.842 16:58:12 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:23.842 16:58:12 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:23.842 16:58:12 -- bdev/bdev_raid.sh@511 -- # killprocess 120535 00:19:23.842 16:58:12 -- common/autotest_common.sh@936 -- # '[' 
-z 120535 ']' 00:19:23.842 16:58:12 -- common/autotest_common.sh@940 -- # kill -0 120535 00:19:23.842 16:58:12 -- common/autotest_common.sh@941 -- # uname 00:19:23.842 16:58:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:23.842 16:58:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120535 00:19:23.842 killing process with pid 120535 00:19:23.842 16:58:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:23.842 16:58:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:23.842 16:58:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120535' 00:19:23.842 16:58:12 -- common/autotest_common.sh@955 -- # kill 120535 00:19:23.842 [2024-11-05 16:58:12.694133] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:23.842 16:58:12 -- common/autotest_common.sh@960 -- # wait 120535 00:19:23.842 [2024-11-05 16:58:12.694204] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:23.842 [2024-11-05 16:58:12.694305] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:23.842 [2024-11-05 16:58:12.694316] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:19:24.101 [2024-11-05 16:58:12.958966] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:25.037 ************************************ 00:19:25.037 END TEST raid_superblock_test 00:19:25.037 ************************************ 00:19:25.037 16:58:13 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:25.037 00:19:25.037 real 0m11.399s 00:19:25.037 user 0m19.983s 00:19:25.037 sys 0m1.310s 00:19:25.037 16:58:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:25.037 16:58:13 -- common/autotest_common.sh@10 -- # set +x 00:19:25.037 16:58:13 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:19:25.037 16:58:13 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:19:25.037 16:58:13 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:19:25.037 16:58:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:25.037 16:58:13 -- common/autotest_common.sh@10 -- # set +x 00:19:25.037 ************************************ 00:19:25.037 START TEST raid_state_function_test 00:19:25.037 ************************************ 00:19:25.037 16:58:13 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 4 false 00:19:25.037 16:58:13 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:25.037 16:58:13 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:25.037 16:58:13 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:19:25.037 16:58:13 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:25.037 16:58:13 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:25.037 16:58:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:25.037 16:58:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:25.037 16:58:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:25.037 16:58:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:25.037 16:58:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:25.037 16:58:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:25.037 16:58:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:25.296 16:58:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:25.296 16:58:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:25.296 16:58:13 -- 
bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:25.296 16:58:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:25.296 16:58:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:25.296 16:58:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:25.296 16:58:13 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:25.296 16:58:13 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:25.296 16:58:13 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:25.296 16:58:13 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:25.296 16:58:13 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:25.296 16:58:13 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:25.296 16:58:13 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:25.296 16:58:13 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:25.296 16:58:13 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:19:25.296 16:58:13 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:19:25.296 16:58:13 -- bdev/bdev_raid.sh@226 -- # raid_pid=120863 00:19:25.296 16:58:13 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120863' 00:19:25.296 16:58:13 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:25.296 Process raid pid: 120863 00:19:25.296 16:58:13 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120863 /var/tmp/spdk-raid.sock 00:19:25.296 16:58:13 -- common/autotest_common.sh@829 -- # '[' -z 120863 ']' 00:19:25.296 16:58:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:25.296 16:58:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.296 16:58:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:25.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:25.296 16:58:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.296 16:58:13 -- common/autotest_common.sh@10 -- # set +x 00:19:25.296 [2024-11-05 16:58:14.006233] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
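
raid_state_function_test is parameterized raid1/4/false: raid1 takes no strip size (strip_size=0, empty -z argument), false disables superblocks, and a second bdev_svc instance (raid_pid 120863, launched with -i 0) is brought up for it. The stanza that follows drives Existed_Raid through its state machine: the array is created while none of BaseBdev1..4 exist, so it sits in "configuring", and each bdev_malloc_create bumps num_base_bdevs_discovered until the fourth flips the state to "online" (the real test also deletes and recreates the array between steps to cross-check each transition). A sketch of that progression, assuming the same socket and jq filter as above:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' \
         -n Existed_Raid             # base bdevs absent -> state "configuring"
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b BaseBdev$i
        $rpc bdev_raid_get_bdevs all | jq -r '.[]
            | select(.name == "Existed_Raid")
            | "\(.num_base_bdevs_discovered)/\(.num_base_bdevs) \(.state)"'
    done                             # prints 1/4 configuring ... 4/4 online
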
00:19:25.296 [2024-11-05 16:58:14.006689] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.296 [2024-11-05 16:58:14.174143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.554 [2024-11-05 16:58:14.333248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.811 [2024-11-05 16:58:14.504309] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.068 16:58:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:26.068 16:58:14 -- common/autotest_common.sh@862 -- # return 0 00:19:26.069 16:58:14 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:26.327 [2024-11-05 16:58:15.092586] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:26.327 [2024-11-05 16:58:15.092837] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:26.327 [2024-11-05 16:58:15.092942] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:26.327 [2024-11-05 16:58:15.093005] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:26.327 [2024-11-05 16:58:15.093095] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:26.327 [2024-11-05 16:58:15.093174] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:26.327 [2024-11-05 16:58:15.093314] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:26.327 [2024-11-05 16:58:15.093379] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:26.327 16:58:15 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:26.327 16:58:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:26.327 16:58:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:26.327 16:58:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:26.327 16:58:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:26.327 16:58:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:26.327 16:58:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:26.327 16:58:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:26.327 16:58:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:26.327 16:58:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:26.327 16:58:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.327 16:58:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:26.586 16:58:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:26.586 "name": "Existed_Raid", 00:19:26.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.586 "strip_size_kb": 0, 00:19:26.586 "state": "configuring", 00:19:26.586 "raid_level": "raid1", 00:19:26.586 "superblock": false, 00:19:26.586 "num_base_bdevs": 4, 00:19:26.586 "num_base_bdevs_discovered": 0, 00:19:26.586 "num_base_bdevs_operational": 4, 00:19:26.586 "base_bdevs_list": [ 00:19:26.586 { 00:19:26.586 "name": 
"BaseBdev1", 00:19:26.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.586 "is_configured": false, 00:19:26.586 "data_offset": 0, 00:19:26.586 "data_size": 0 00:19:26.586 }, 00:19:26.586 { 00:19:26.586 "name": "BaseBdev2", 00:19:26.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.586 "is_configured": false, 00:19:26.586 "data_offset": 0, 00:19:26.586 "data_size": 0 00:19:26.586 }, 00:19:26.586 { 00:19:26.586 "name": "BaseBdev3", 00:19:26.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.586 "is_configured": false, 00:19:26.586 "data_offset": 0, 00:19:26.586 "data_size": 0 00:19:26.586 }, 00:19:26.586 { 00:19:26.586 "name": "BaseBdev4", 00:19:26.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.586 "is_configured": false, 00:19:26.586 "data_offset": 0, 00:19:26.586 "data_size": 0 00:19:26.586 } 00:19:26.586 ] 00:19:26.586 }' 00:19:26.586 16:58:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:26.586 16:58:15 -- common/autotest_common.sh@10 -- # set +x 00:19:27.153 16:58:15 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:27.411 [2024-11-05 16:58:16.109015] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:27.411 [2024-11-05 16:58:16.109208] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:19:27.411 16:58:16 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:27.670 [2024-11-05 16:58:16.353071] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:27.670 [2024-11-05 16:58:16.353301] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:27.670 [2024-11-05 16:58:16.353420] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:27.670 [2024-11-05 16:58:16.353482] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:27.670 [2024-11-05 16:58:16.353571] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:27.670 [2024-11-05 16:58:16.353643] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:27.670 [2024-11-05 16:58:16.353676] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:27.670 [2024-11-05 16:58:16.353830] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:27.670 16:58:16 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:27.933 [2024-11-05 16:58:16.571366] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:27.933 BaseBdev1 00:19:27.933 16:58:16 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:27.933 16:58:16 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:27.933 16:58:16 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:27.933 16:58:16 -- common/autotest_common.sh@899 -- # local i 00:19:27.933 16:58:16 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:27.933 16:58:16 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:27.933 16:58:16 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:27.933 16:58:16 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:28.201 [ 00:19:28.201 { 00:19:28.201 "name": "BaseBdev1", 00:19:28.201 "aliases": [ 00:19:28.201 "156b46cf-7889-4fc6-9751-d251cb368488" 00:19:28.201 ], 00:19:28.201 "product_name": "Malloc disk", 00:19:28.201 "block_size": 512, 00:19:28.201 "num_blocks": 65536, 00:19:28.201 "uuid": "156b46cf-7889-4fc6-9751-d251cb368488", 00:19:28.201 "assigned_rate_limits": { 00:19:28.201 "rw_ios_per_sec": 0, 00:19:28.201 "rw_mbytes_per_sec": 0, 00:19:28.201 "r_mbytes_per_sec": 0, 00:19:28.201 "w_mbytes_per_sec": 0 00:19:28.201 }, 00:19:28.201 "claimed": true, 00:19:28.201 "claim_type": "exclusive_write", 00:19:28.201 "zoned": false, 00:19:28.201 "supported_io_types": { 00:19:28.201 "read": true, 00:19:28.201 "write": true, 00:19:28.201 "unmap": true, 00:19:28.201 "write_zeroes": true, 00:19:28.201 "flush": true, 00:19:28.201 "reset": true, 00:19:28.201 "compare": false, 00:19:28.201 "compare_and_write": false, 00:19:28.201 "abort": true, 00:19:28.201 "nvme_admin": false, 00:19:28.201 "nvme_io": false 00:19:28.201 }, 00:19:28.201 "memory_domains": [ 00:19:28.201 { 00:19:28.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.201 "dma_device_type": 2 00:19:28.201 } 00:19:28.201 ], 00:19:28.201 "driver_specific": {} 00:19:28.201 } 00:19:28.201 ] 00:19:28.201 16:58:16 -- common/autotest_common.sh@905 -- # return 0 00:19:28.201 16:58:16 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:28.201 16:58:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:28.201 16:58:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:28.201 16:58:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:28.201 16:58:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:28.201 16:58:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:28.201 16:58:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:28.201 16:58:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:28.201 16:58:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:28.201 16:58:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:28.201 16:58:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.201 16:58:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.460 16:58:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:28.460 "name": "Existed_Raid", 00:19:28.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.460 "strip_size_kb": 0, 00:19:28.460 "state": "configuring", 00:19:28.460 "raid_level": "raid1", 00:19:28.460 "superblock": false, 00:19:28.460 "num_base_bdevs": 4, 00:19:28.460 "num_base_bdevs_discovered": 1, 00:19:28.460 "num_base_bdevs_operational": 4, 00:19:28.460 "base_bdevs_list": [ 00:19:28.460 { 00:19:28.460 "name": "BaseBdev1", 00:19:28.460 "uuid": "156b46cf-7889-4fc6-9751-d251cb368488", 00:19:28.460 "is_configured": true, 00:19:28.460 "data_offset": 0, 00:19:28.460 "data_size": 65536 00:19:28.460 }, 00:19:28.460 { 00:19:28.460 "name": "BaseBdev2", 00:19:28.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.461 "is_configured": false, 00:19:28.461 "data_offset": 0, 00:19:28.461 "data_size": 0 00:19:28.461 }, 
00:19:28.461 { 00:19:28.461 "name": "BaseBdev3", 00:19:28.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.461 "is_configured": false, 00:19:28.461 "data_offset": 0, 00:19:28.461 "data_size": 0 00:19:28.461 }, 00:19:28.461 { 00:19:28.461 "name": "BaseBdev4", 00:19:28.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.461 "is_configured": false, 00:19:28.461 "data_offset": 0, 00:19:28.461 "data_size": 0 00:19:28.461 } 00:19:28.461 ] 00:19:28.461 }' 00:19:28.461 16:58:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:28.461 16:58:17 -- common/autotest_common.sh@10 -- # set +x 00:19:29.028 16:58:17 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:29.287 [2024-11-05 16:58:18.003640] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:29.287 [2024-11-05 16:58:18.003868] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:29.287 16:58:18 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:29.287 16:58:18 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:29.546 [2024-11-05 16:58:18.259736] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:29.546 [2024-11-05 16:58:18.261755] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:29.546 [2024-11-05 16:58:18.261980] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:29.546 [2024-11-05 16:58:18.262094] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:29.546 [2024-11-05 16:58:18.262159] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:29.546 [2024-11-05 16:58:18.262261] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:29.546 [2024-11-05 16:58:18.262318] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:29.546 16:58:18 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:29.546 16:58:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:29.546 16:58:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:29.546 16:58:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:29.546 16:58:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:29.546 16:58:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:29.546 16:58:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:29.546 16:58:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:29.546 16:58:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:29.546 16:58:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:29.546 16:58:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:29.546 16:58:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:29.546 16:58:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.546 16:58:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.805 16:58:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:29.805 "name": "Existed_Raid", 00:19:29.805 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:29.805 "strip_size_kb": 0, 00:19:29.805 "state": "configuring", 00:19:29.805 "raid_level": "raid1", 00:19:29.805 "superblock": false, 00:19:29.805 "num_base_bdevs": 4, 00:19:29.805 "num_base_bdevs_discovered": 1, 00:19:29.805 "num_base_bdevs_operational": 4, 00:19:29.805 "base_bdevs_list": [ 00:19:29.805 { 00:19:29.805 "name": "BaseBdev1", 00:19:29.805 "uuid": "156b46cf-7889-4fc6-9751-d251cb368488", 00:19:29.805 "is_configured": true, 00:19:29.805 "data_offset": 0, 00:19:29.805 "data_size": 65536 00:19:29.805 }, 00:19:29.805 { 00:19:29.805 "name": "BaseBdev2", 00:19:29.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.805 "is_configured": false, 00:19:29.805 "data_offset": 0, 00:19:29.805 "data_size": 0 00:19:29.805 }, 00:19:29.805 { 00:19:29.805 "name": "BaseBdev3", 00:19:29.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.805 "is_configured": false, 00:19:29.805 "data_offset": 0, 00:19:29.805 "data_size": 0 00:19:29.805 }, 00:19:29.805 { 00:19:29.805 "name": "BaseBdev4", 00:19:29.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.805 "is_configured": false, 00:19:29.805 "data_offset": 0, 00:19:29.805 "data_size": 0 00:19:29.805 } 00:19:29.805 ] 00:19:29.805 }' 00:19:29.805 16:58:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:29.805 16:58:18 -- common/autotest_common.sh@10 -- # set +x 00:19:30.372 16:58:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:30.631 [2024-11-05 16:58:19.289931] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:30.631 BaseBdev2 00:19:30.631 16:58:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:30.631 16:58:19 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:30.631 16:58:19 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:30.631 16:58:19 -- common/autotest_common.sh@899 -- # local i 00:19:30.631 16:58:19 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:30.631 16:58:19 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:30.631 16:58:19 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:30.890 16:58:19 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:30.890 [ 00:19:30.890 { 00:19:30.890 "name": "BaseBdev2", 00:19:30.890 "aliases": [ 00:19:30.890 "386cee8f-b3ae-407f-99f0-ee21e2396284" 00:19:30.890 ], 00:19:30.890 "product_name": "Malloc disk", 00:19:30.890 "block_size": 512, 00:19:30.890 "num_blocks": 65536, 00:19:30.890 "uuid": "386cee8f-b3ae-407f-99f0-ee21e2396284", 00:19:30.890 "assigned_rate_limits": { 00:19:30.890 "rw_ios_per_sec": 0, 00:19:30.890 "rw_mbytes_per_sec": 0, 00:19:30.890 "r_mbytes_per_sec": 0, 00:19:30.890 "w_mbytes_per_sec": 0 00:19:30.890 }, 00:19:30.890 "claimed": true, 00:19:30.890 "claim_type": "exclusive_write", 00:19:30.890 "zoned": false, 00:19:30.890 "supported_io_types": { 00:19:30.890 "read": true, 00:19:30.890 "write": true, 00:19:30.890 "unmap": true, 00:19:30.890 "write_zeroes": true, 00:19:30.890 "flush": true, 00:19:30.890 "reset": true, 00:19:30.890 "compare": false, 00:19:30.890 "compare_and_write": false, 00:19:30.890 "abort": true, 00:19:30.890 "nvme_admin": false, 00:19:30.890 "nvme_io": false 00:19:30.890 }, 00:19:30.890 "memory_domains": [ 00:19:30.890 { 
00:19:30.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.890 "dma_device_type": 2 00:19:30.890 } 00:19:30.890 ], 00:19:30.890 "driver_specific": {} 00:19:30.890 } 00:19:30.890 ] 00:19:31.154 16:58:19 -- common/autotest_common.sh@905 -- # return 0 00:19:31.154 16:58:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:31.154 16:58:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:31.154 16:58:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:31.154 16:58:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:31.154 16:58:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:31.154 16:58:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:31.154 16:58:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:31.154 16:58:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:31.154 16:58:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:31.154 16:58:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:31.154 16:58:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:31.154 16:58:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:31.154 16:58:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.154 16:58:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.154 16:58:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:31.154 "name": "Existed_Raid", 00:19:31.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.154 "strip_size_kb": 0, 00:19:31.154 "state": "configuring", 00:19:31.154 "raid_level": "raid1", 00:19:31.154 "superblock": false, 00:19:31.154 "num_base_bdevs": 4, 00:19:31.154 "num_base_bdevs_discovered": 2, 00:19:31.154 "num_base_bdevs_operational": 4, 00:19:31.154 "base_bdevs_list": [ 00:19:31.154 { 00:19:31.154 "name": "BaseBdev1", 00:19:31.154 "uuid": "156b46cf-7889-4fc6-9751-d251cb368488", 00:19:31.154 "is_configured": true, 00:19:31.154 "data_offset": 0, 00:19:31.154 "data_size": 65536 00:19:31.154 }, 00:19:31.154 { 00:19:31.154 "name": "BaseBdev2", 00:19:31.154 "uuid": "386cee8f-b3ae-407f-99f0-ee21e2396284", 00:19:31.154 "is_configured": true, 00:19:31.154 "data_offset": 0, 00:19:31.154 "data_size": 65536 00:19:31.154 }, 00:19:31.154 { 00:19:31.154 "name": "BaseBdev3", 00:19:31.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.154 "is_configured": false, 00:19:31.154 "data_offset": 0, 00:19:31.154 "data_size": 0 00:19:31.154 }, 00:19:31.154 { 00:19:31.154 "name": "BaseBdev4", 00:19:31.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.154 "is_configured": false, 00:19:31.154 "data_offset": 0, 00:19:31.154 "data_size": 0 00:19:31.154 } 00:19:31.154 ] 00:19:31.154 }' 00:19:31.154 16:58:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:31.154 16:58:19 -- common/autotest_common.sh@10 -- # set +x 00:19:31.721 16:58:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:31.979 [2024-11-05 16:58:20.849257] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:31.979 BaseBdev3 00:19:31.979 16:58:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:31.979 16:58:20 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:31.979 16:58:20 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:31.979 16:58:20 -- 
common/autotest_common.sh@899 -- # local i 00:19:31.979 16:58:20 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:31.979 16:58:20 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:31.979 16:58:20 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:32.238 16:58:21 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:32.497 [ 00:19:32.497 { 00:19:32.497 "name": "BaseBdev3", 00:19:32.497 "aliases": [ 00:19:32.497 "764a543a-9c1c-4ced-a561-90003967ec9a" 00:19:32.497 ], 00:19:32.497 "product_name": "Malloc disk", 00:19:32.497 "block_size": 512, 00:19:32.497 "num_blocks": 65536, 00:19:32.497 "uuid": "764a543a-9c1c-4ced-a561-90003967ec9a", 00:19:32.497 "assigned_rate_limits": { 00:19:32.497 "rw_ios_per_sec": 0, 00:19:32.497 "rw_mbytes_per_sec": 0, 00:19:32.497 "r_mbytes_per_sec": 0, 00:19:32.497 "w_mbytes_per_sec": 0 00:19:32.497 }, 00:19:32.497 "claimed": true, 00:19:32.497 "claim_type": "exclusive_write", 00:19:32.497 "zoned": false, 00:19:32.497 "supported_io_types": { 00:19:32.497 "read": true, 00:19:32.497 "write": true, 00:19:32.497 "unmap": true, 00:19:32.497 "write_zeroes": true, 00:19:32.497 "flush": true, 00:19:32.497 "reset": true, 00:19:32.497 "compare": false, 00:19:32.497 "compare_and_write": false, 00:19:32.497 "abort": true, 00:19:32.497 "nvme_admin": false, 00:19:32.497 "nvme_io": false 00:19:32.497 }, 00:19:32.497 "memory_domains": [ 00:19:32.497 { 00:19:32.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.497 "dma_device_type": 2 00:19:32.497 } 00:19:32.497 ], 00:19:32.497 "driver_specific": {} 00:19:32.497 } 00:19:32.497 ] 00:19:32.497 16:58:21 -- common/autotest_common.sh@905 -- # return 0 00:19:32.497 16:58:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:32.497 16:58:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:32.497 16:58:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:32.497 16:58:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:32.497 16:58:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:32.497 16:58:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:32.497 16:58:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:32.497 16:58:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:32.497 16:58:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:32.497 16:58:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:32.497 16:58:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:32.497 16:58:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:32.497 16:58:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.497 16:58:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.756 16:58:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:32.756 "name": "Existed_Raid", 00:19:32.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.756 "strip_size_kb": 0, 00:19:32.756 "state": "configuring", 00:19:32.756 "raid_level": "raid1", 00:19:32.756 "superblock": false, 00:19:32.756 "num_base_bdevs": 4, 00:19:32.756 "num_base_bdevs_discovered": 3, 00:19:32.756 "num_base_bdevs_operational": 4, 00:19:32.756 "base_bdevs_list": [ 00:19:32.756 { 00:19:32.756 "name": "BaseBdev1", 
00:19:32.756 "uuid": "156b46cf-7889-4fc6-9751-d251cb368488", 00:19:32.756 "is_configured": true, 00:19:32.756 "data_offset": 0, 00:19:32.756 "data_size": 65536 00:19:32.756 }, 00:19:32.756 { 00:19:32.756 "name": "BaseBdev2", 00:19:32.756 "uuid": "386cee8f-b3ae-407f-99f0-ee21e2396284", 00:19:32.756 "is_configured": true, 00:19:32.756 "data_offset": 0, 00:19:32.756 "data_size": 65536 00:19:32.756 }, 00:19:32.756 { 00:19:32.756 "name": "BaseBdev3", 00:19:32.756 "uuid": "764a543a-9c1c-4ced-a561-90003967ec9a", 00:19:32.756 "is_configured": true, 00:19:32.756 "data_offset": 0, 00:19:32.756 "data_size": 65536 00:19:32.756 }, 00:19:32.756 { 00:19:32.756 "name": "BaseBdev4", 00:19:32.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.756 "is_configured": false, 00:19:32.756 "data_offset": 0, 00:19:32.756 "data_size": 0 00:19:32.756 } 00:19:32.756 ] 00:19:32.756 }' 00:19:32.756 16:58:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:32.756 16:58:21 -- common/autotest_common.sh@10 -- # set +x 00:19:33.692 16:58:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:33.692 [2024-11-05 16:58:22.497995] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:33.692 [2024-11-05 16:58:22.498329] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:19:33.692 [2024-11-05 16:58:22.498373] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:33.692 [2024-11-05 16:58:22.498628] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:33.692 [2024-11-05 16:58:22.499103] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:19:33.692 [2024-11-05 16:58:22.499252] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:19:33.692 [2024-11-05 16:58:22.499642] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.692 BaseBdev4 00:19:33.692 16:58:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:33.692 16:58:22 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:19:33.692 16:58:22 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:33.692 16:58:22 -- common/autotest_common.sh@899 -- # local i 00:19:33.692 16:58:22 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:33.692 16:58:22 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:33.692 16:58:22 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:33.950 16:58:22 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:34.209 [ 00:19:34.209 { 00:19:34.209 "name": "BaseBdev4", 00:19:34.209 "aliases": [ 00:19:34.209 "6e798b10-0e1b-4e63-b1d7-bc5a5404356d" 00:19:34.209 ], 00:19:34.209 "product_name": "Malloc disk", 00:19:34.209 "block_size": 512, 00:19:34.209 "num_blocks": 65536, 00:19:34.209 "uuid": "6e798b10-0e1b-4e63-b1d7-bc5a5404356d", 00:19:34.209 "assigned_rate_limits": { 00:19:34.209 "rw_ios_per_sec": 0, 00:19:34.209 "rw_mbytes_per_sec": 0, 00:19:34.209 "r_mbytes_per_sec": 0, 00:19:34.209 "w_mbytes_per_sec": 0 00:19:34.209 }, 00:19:34.209 "claimed": true, 00:19:34.209 "claim_type": "exclusive_write", 00:19:34.209 "zoned": false, 00:19:34.209 "supported_io_types": { 
00:19:34.209 "read": true, 00:19:34.209 "write": true, 00:19:34.209 "unmap": true, 00:19:34.209 "write_zeroes": true, 00:19:34.209 "flush": true, 00:19:34.209 "reset": true, 00:19:34.209 "compare": false, 00:19:34.209 "compare_and_write": false, 00:19:34.209 "abort": true, 00:19:34.209 "nvme_admin": false, 00:19:34.209 "nvme_io": false 00:19:34.209 }, 00:19:34.209 "memory_domains": [ 00:19:34.209 { 00:19:34.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.209 "dma_device_type": 2 00:19:34.209 } 00:19:34.209 ], 00:19:34.209 "driver_specific": {} 00:19:34.209 } 00:19:34.209 ] 00:19:34.209 16:58:23 -- common/autotest_common.sh@905 -- # return 0 00:19:34.209 16:58:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:34.209 16:58:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:34.209 16:58:23 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:34.209 16:58:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:34.209 16:58:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:34.209 16:58:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:34.209 16:58:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:34.209 16:58:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:34.209 16:58:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:34.209 16:58:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:34.209 16:58:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:34.209 16:58:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:34.209 16:58:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.209 16:58:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.468 16:58:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:34.468 "name": "Existed_Raid", 00:19:34.468 "uuid": "34d5ef64-a802-417a-a2f3-1e4f9356334a", 00:19:34.468 "strip_size_kb": 0, 00:19:34.468 "state": "online", 00:19:34.468 "raid_level": "raid1", 00:19:34.468 "superblock": false, 00:19:34.468 "num_base_bdevs": 4, 00:19:34.468 "num_base_bdevs_discovered": 4, 00:19:34.468 "num_base_bdevs_operational": 4, 00:19:34.468 "base_bdevs_list": [ 00:19:34.468 { 00:19:34.468 "name": "BaseBdev1", 00:19:34.468 "uuid": "156b46cf-7889-4fc6-9751-d251cb368488", 00:19:34.468 "is_configured": true, 00:19:34.468 "data_offset": 0, 00:19:34.468 "data_size": 65536 00:19:34.468 }, 00:19:34.468 { 00:19:34.468 "name": "BaseBdev2", 00:19:34.468 "uuid": "386cee8f-b3ae-407f-99f0-ee21e2396284", 00:19:34.468 "is_configured": true, 00:19:34.468 "data_offset": 0, 00:19:34.468 "data_size": 65536 00:19:34.468 }, 00:19:34.468 { 00:19:34.468 "name": "BaseBdev3", 00:19:34.468 "uuid": "764a543a-9c1c-4ced-a561-90003967ec9a", 00:19:34.468 "is_configured": true, 00:19:34.468 "data_offset": 0, 00:19:34.468 "data_size": 65536 00:19:34.468 }, 00:19:34.468 { 00:19:34.468 "name": "BaseBdev4", 00:19:34.468 "uuid": "6e798b10-0e1b-4e63-b1d7-bc5a5404356d", 00:19:34.468 "is_configured": true, 00:19:34.468 "data_offset": 0, 00:19:34.468 "data_size": 65536 00:19:34.468 } 00:19:34.468 ] 00:19:34.468 }' 00:19:34.468 16:58:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:34.468 16:58:23 -- common/autotest_common.sh@10 -- # set +x 00:19:35.034 16:58:23 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:35.312 [2024-11-05 16:58:24.050327] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:35.312 16:58:24 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:35.312 16:58:24 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:35.312 16:58:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:35.312 16:58:24 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:35.312 16:58:24 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:35.312 16:58:24 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:35.312 16:58:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:35.312 16:58:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:35.312 16:58:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:35.313 16:58:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:35.313 16:58:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:35.313 16:58:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:35.313 16:58:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:35.313 16:58:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:35.313 16:58:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:35.313 16:58:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.313 16:58:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:35.571 16:58:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:35.571 "name": "Existed_Raid", 00:19:35.571 "uuid": "34d5ef64-a802-417a-a2f3-1e4f9356334a", 00:19:35.571 "strip_size_kb": 0, 00:19:35.571 "state": "online", 00:19:35.571 "raid_level": "raid1", 00:19:35.571 "superblock": false, 00:19:35.571 "num_base_bdevs": 4, 00:19:35.571 "num_base_bdevs_discovered": 3, 00:19:35.571 "num_base_bdevs_operational": 3, 00:19:35.571 "base_bdevs_list": [ 00:19:35.571 { 00:19:35.571 "name": null, 00:19:35.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.571 "is_configured": false, 00:19:35.571 "data_offset": 0, 00:19:35.571 "data_size": 65536 00:19:35.571 }, 00:19:35.571 { 00:19:35.571 "name": "BaseBdev2", 00:19:35.571 "uuid": "386cee8f-b3ae-407f-99f0-ee21e2396284", 00:19:35.571 "is_configured": true, 00:19:35.571 "data_offset": 0, 00:19:35.571 "data_size": 65536 00:19:35.571 }, 00:19:35.571 { 00:19:35.571 "name": "BaseBdev3", 00:19:35.571 "uuid": "764a543a-9c1c-4ced-a561-90003967ec9a", 00:19:35.571 "is_configured": true, 00:19:35.571 "data_offset": 0, 00:19:35.571 "data_size": 65536 00:19:35.571 }, 00:19:35.571 { 00:19:35.571 "name": "BaseBdev4", 00:19:35.571 "uuid": "6e798b10-0e1b-4e63-b1d7-bc5a5404356d", 00:19:35.571 "is_configured": true, 00:19:35.571 "data_offset": 0, 00:19:35.571 "data_size": 65536 00:19:35.571 } 00:19:35.571 ] 00:19:35.571 }' 00:19:35.571 16:58:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:35.571 16:58:24 -- common/autotest_common.sh@10 -- # set +x 00:19:36.508 16:58:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:36.508 16:58:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:36.508 16:58:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.508 16:58:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:36.508 16:58:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:36.508 16:58:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:36.508 16:58:25 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:36.767 [2024-11-05 16:58:25.558073] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:36.767 16:58:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:36.767 16:58:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:36.767 16:58:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.767 16:58:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:37.026 16:58:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:37.026 16:58:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:37.026 16:58:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:37.285 [2024-11-05 16:58:26.137212] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:37.543 16:58:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:37.543 16:58:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:37.544 16:58:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:37.544 16:58:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.544 16:58:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:37.544 16:58:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:37.544 16:58:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:37.810 [2024-11-05 16:58:26.594144] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:37.810 [2024-11-05 16:58:26.594342] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:37.810 [2024-11-05 16:58:26.594561] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:37.810 [2024-11-05 16:58:26.660252] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:37.810 [2024-11-05 16:58:26.660520] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:19:37.810 16:58:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:37.810 16:58:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:37.810 16:58:26 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:37.810 16:58:26 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.075 16:58:26 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:38.075 16:58:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:38.075 16:58:26 -- bdev/bdev_raid.sh@287 -- # killprocess 120863 00:19:38.075 16:58:26 -- common/autotest_common.sh@936 -- # '[' -z 120863 ']' 00:19:38.075 16:58:26 -- common/autotest_common.sh@940 -- # kill -0 120863 00:19:38.075 16:58:26 -- common/autotest_common.sh@941 -- # uname 00:19:38.075 16:58:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:38.075 16:58:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120863 00:19:38.075 16:58:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:38.075 16:58:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:38.075 16:58:26 -- common/autotest_common.sh@954 -- # echo 'killing process with 
pid 120863' 00:19:38.075 killing process with pid 120863 00:19:38.075 16:58:26 -- common/autotest_common.sh@955 -- # kill 120863 00:19:38.075 [2024-11-05 16:58:26.956574] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:38.076 16:58:26 -- common/autotest_common.sh@960 -- # wait 120863 00:19:38.076 [2024-11-05 16:58:26.956831] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:39.012 ************************************ 00:19:39.013 END TEST raid_state_function_test 00:19:39.013 ************************************ 00:19:39.013 16:58:27 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:39.013 00:19:39.013 real 0m13.946s 00:19:39.013 user 0m25.062s 00:19:39.013 sys 0m1.500s 00:19:39.013 16:58:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:39.013 16:58:27 -- common/autotest_common.sh@10 -- # set +x 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:19:39.272 16:58:27 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:19:39.272 16:58:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:39.272 16:58:27 -- common/autotest_common.sh@10 -- # set +x 00:19:39.272 ************************************ 00:19:39.272 START TEST raid_state_function_test_sb 00:19:39.272 ************************************ 00:19:39.272 16:58:27 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 4 true 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@226 -- # 
raid_pid=121295 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:39.272 Process raid pid: 121295 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121295' 00:19:39.272 16:58:27 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121295 /var/tmp/spdk-raid.sock 00:19:39.272 16:58:27 -- common/autotest_common.sh@829 -- # '[' -z 121295 ']' 00:19:39.272 16:58:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:39.272 16:58:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:39.272 16:58:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:39.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:39.272 16:58:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:39.272 16:58:27 -- common/autotest_common.sh@10 -- # set +x 00:19:39.272 [2024-11-05 16:58:27.997859] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:39.272 [2024-11-05 16:58:27.998197] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.272 [2024-11-05 16:58:28.157816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.530 [2024-11-05 16:58:28.369336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.789 [2024-11-05 16:58:28.541509] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:40.047 16:58:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:40.047 16:58:28 -- common/autotest_common.sh@862 -- # return 0 00:19:40.047 16:58:28 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:40.306 [2024-11-05 16:58:29.087425] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:40.306 [2024-11-05 16:58:29.087672] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:40.306 [2024-11-05 16:58:29.087836] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:40.306 [2024-11-05 16:58:29.087904] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:40.306 [2024-11-05 16:58:29.088061] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:40.306 [2024-11-05 16:58:29.088142] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:40.306 [2024-11-05 16:58:29.088244] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:40.306 [2024-11-05 16:58:29.088307] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:40.306 16:58:29 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:40.306 16:58:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:40.306 16:58:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:40.306 16:58:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:40.306 16:58:29 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:40.306 16:58:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:40.306 16:58:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:40.306 16:58:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:40.306 16:58:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:40.306 16:58:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:40.306 16:58:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.306 16:58:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.565 16:58:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:40.565 "name": "Existed_Raid", 00:19:40.565 "uuid": "7e95989e-85fe-4c28-8ed6-785d1b006d16", 00:19:40.565 "strip_size_kb": 0, 00:19:40.565 "state": "configuring", 00:19:40.565 "raid_level": "raid1", 00:19:40.565 "superblock": true, 00:19:40.565 "num_base_bdevs": 4, 00:19:40.565 "num_base_bdevs_discovered": 0, 00:19:40.565 "num_base_bdevs_operational": 4, 00:19:40.565 "base_bdevs_list": [ 00:19:40.565 { 00:19:40.565 "name": "BaseBdev1", 00:19:40.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.565 "is_configured": false, 00:19:40.565 "data_offset": 0, 00:19:40.565 "data_size": 0 00:19:40.565 }, 00:19:40.565 { 00:19:40.565 "name": "BaseBdev2", 00:19:40.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.565 "is_configured": false, 00:19:40.565 "data_offset": 0, 00:19:40.565 "data_size": 0 00:19:40.565 }, 00:19:40.565 { 00:19:40.565 "name": "BaseBdev3", 00:19:40.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.565 "is_configured": false, 00:19:40.565 "data_offset": 0, 00:19:40.565 "data_size": 0 00:19:40.565 }, 00:19:40.565 { 00:19:40.565 "name": "BaseBdev4", 00:19:40.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.565 "is_configured": false, 00:19:40.565 "data_offset": 0, 00:19:40.565 "data_size": 0 00:19:40.565 } 00:19:40.565 ] 00:19:40.565 }' 00:19:40.565 16:58:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:40.565 16:58:29 -- common/autotest_common.sh@10 -- # set +x 00:19:41.145 16:58:29 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:41.403 [2024-11-05 16:58:30.111476] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:41.403 [2024-11-05 16:58:30.111799] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:19:41.403 16:58:30 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:41.723 [2024-11-05 16:58:30.367589] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:41.723 [2024-11-05 16:58:30.367832] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:41.723 [2024-11-05 16:58:30.367937] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:41.723 [2024-11-05 16:58:30.368068] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:41.723 [2024-11-05 16:58:30.368163] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:41.723 [2024-11-05 16:58:30.368325] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:41.723 [2024-11-05 16:58:30.368459] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:41.723 [2024-11-05 16:58:30.368523] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:41.723 16:58:30 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:41.982 [2024-11-05 16:58:30.658008] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:41.982 BaseBdev1 00:19:41.982 16:58:30 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:41.982 16:58:30 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:41.982 16:58:30 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:41.982 16:58:30 -- common/autotest_common.sh@899 -- # local i 00:19:41.982 16:58:30 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:41.982 16:58:30 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:41.982 16:58:30 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:41.982 16:58:30 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:42.240 [ 00:19:42.240 { 00:19:42.240 "name": "BaseBdev1", 00:19:42.240 "aliases": [ 00:19:42.240 "ffffff41-575b-47ee-8d8e-bbebe82d8b63" 00:19:42.240 ], 00:19:42.240 "product_name": "Malloc disk", 00:19:42.240 "block_size": 512, 00:19:42.240 "num_blocks": 65536, 00:19:42.240 "uuid": "ffffff41-575b-47ee-8d8e-bbebe82d8b63", 00:19:42.240 "assigned_rate_limits": { 00:19:42.240 "rw_ios_per_sec": 0, 00:19:42.240 "rw_mbytes_per_sec": 0, 00:19:42.240 "r_mbytes_per_sec": 0, 00:19:42.241 "w_mbytes_per_sec": 0 00:19:42.241 }, 00:19:42.241 "claimed": true, 00:19:42.241 "claim_type": "exclusive_write", 00:19:42.241 "zoned": false, 00:19:42.241 "supported_io_types": { 00:19:42.241 "read": true, 00:19:42.241 "write": true, 00:19:42.241 "unmap": true, 00:19:42.241 "write_zeroes": true, 00:19:42.241 "flush": true, 00:19:42.241 "reset": true, 00:19:42.241 "compare": false, 00:19:42.241 "compare_and_write": false, 00:19:42.241 "abort": true, 00:19:42.241 "nvme_admin": false, 00:19:42.241 "nvme_io": false 00:19:42.241 }, 00:19:42.241 "memory_domains": [ 00:19:42.241 { 00:19:42.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.241 "dma_device_type": 2 00:19:42.241 } 00:19:42.241 ], 00:19:42.241 "driver_specific": {} 00:19:42.241 } 00:19:42.241 ] 00:19:42.241 16:58:31 -- common/autotest_common.sh@905 -- # return 0 00:19:42.241 16:58:31 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:42.241 16:58:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:42.241 16:58:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:42.241 16:58:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:42.241 16:58:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:42.241 16:58:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:42.241 16:58:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:42.241 16:58:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:42.241 16:58:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:42.241 16:58:31 -- bdev/bdev_raid.sh@125 -- # local tmp 
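(Aside: the verify_raid_bdev_state helper traced above amounts to dumping the raid bdev over the RPC socket and comparing a few JSON fields. A minimal stand-alone sketch of that check follows — it assumes an SPDK app is already listening on /var/tmp/spdk-raid.sock, and the helper name check_raid_state is hypothetical, not the actual function from bdev_raid.sh.)

  #!/usr/bin/env bash
  # Sketch of the state check performed by verify_raid_bdev_state; a
  # hypothetical re-implementation, not the real helper.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  check_raid_state() {
      local name=$1 expected_state=$2 expected_level=$3
      local info
      # Same query the test uses: dump all raid bdevs, pick ours by name.
      info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
             jq -r ".[] | select(.name == \"$name\")")
      [[ -n $info ]] || return 1
      # Compare the fields visible in the dumps above.
      [[ $(jq -r '.state' <<<"$info") == "$expected_state" ]] || return 1
      [[ $(jq -r '.raid_level' <<<"$info") == "$expected_level" ]] || return 1
  }

  # e.g.: check_raid_state Existed_Raid configuring raid1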
00:19:42.241 16:58:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.241 16:58:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.499 16:58:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:42.499 "name": "Existed_Raid", 00:19:42.499 "uuid": "8e6f0bfe-92a2-4e6a-addf-38290c9700af", 00:19:42.499 "strip_size_kb": 0, 00:19:42.499 "state": "configuring", 00:19:42.499 "raid_level": "raid1", 00:19:42.499 "superblock": true, 00:19:42.499 "num_base_bdevs": 4, 00:19:42.499 "num_base_bdevs_discovered": 1, 00:19:42.499 "num_base_bdevs_operational": 4, 00:19:42.499 "base_bdevs_list": [ 00:19:42.499 { 00:19:42.499 "name": "BaseBdev1", 00:19:42.499 "uuid": "ffffff41-575b-47ee-8d8e-bbebe82d8b63", 00:19:42.499 "is_configured": true, 00:19:42.499 "data_offset": 2048, 00:19:42.499 "data_size": 63488 00:19:42.499 }, 00:19:42.499 { 00:19:42.499 "name": "BaseBdev2", 00:19:42.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.499 "is_configured": false, 00:19:42.499 "data_offset": 0, 00:19:42.499 "data_size": 0 00:19:42.499 }, 00:19:42.499 { 00:19:42.499 "name": "BaseBdev3", 00:19:42.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.499 "is_configured": false, 00:19:42.499 "data_offset": 0, 00:19:42.499 "data_size": 0 00:19:42.499 }, 00:19:42.499 { 00:19:42.499 "name": "BaseBdev4", 00:19:42.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.499 "is_configured": false, 00:19:42.499 "data_offset": 0, 00:19:42.499 "data_size": 0 00:19:42.499 } 00:19:42.499 ] 00:19:42.499 }' 00:19:42.500 16:58:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:42.500 16:58:31 -- common/autotest_common.sh@10 -- # set +x 00:19:43.066 16:58:31 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:43.325 [2024-11-05 16:58:32.138318] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:43.325 [2024-11-05 16:58:32.138539] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:43.325 16:58:32 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:43.325 16:58:32 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:43.583 16:58:32 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:43.842 BaseBdev1 00:19:43.842 16:58:32 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:43.842 16:58:32 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:43.842 16:58:32 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:43.842 16:58:32 -- common/autotest_common.sh@899 -- # local i 00:19:43.842 16:58:32 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:43.842 16:58:32 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:43.842 16:58:32 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:44.100 16:58:32 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:44.359 [ 00:19:44.359 { 00:19:44.359 "name": "BaseBdev1", 00:19:44.359 "aliases": [ 00:19:44.359 "c0cc2fe8-4f3e-4388-a3bb-fec31a9c1607" 00:19:44.359 ], 00:19:44.359 
"product_name": "Malloc disk", 00:19:44.359 "block_size": 512, 00:19:44.359 "num_blocks": 65536, 00:19:44.359 "uuid": "c0cc2fe8-4f3e-4388-a3bb-fec31a9c1607", 00:19:44.359 "assigned_rate_limits": { 00:19:44.359 "rw_ios_per_sec": 0, 00:19:44.359 "rw_mbytes_per_sec": 0, 00:19:44.359 "r_mbytes_per_sec": 0, 00:19:44.359 "w_mbytes_per_sec": 0 00:19:44.359 }, 00:19:44.359 "claimed": false, 00:19:44.359 "zoned": false, 00:19:44.359 "supported_io_types": { 00:19:44.359 "read": true, 00:19:44.359 "write": true, 00:19:44.359 "unmap": true, 00:19:44.359 "write_zeroes": true, 00:19:44.359 "flush": true, 00:19:44.359 "reset": true, 00:19:44.359 "compare": false, 00:19:44.359 "compare_and_write": false, 00:19:44.359 "abort": true, 00:19:44.359 "nvme_admin": false, 00:19:44.359 "nvme_io": false 00:19:44.359 }, 00:19:44.359 "memory_domains": [ 00:19:44.359 { 00:19:44.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.359 "dma_device_type": 2 00:19:44.359 } 00:19:44.359 ], 00:19:44.359 "driver_specific": {} 00:19:44.359 } 00:19:44.359 ] 00:19:44.359 16:58:33 -- common/autotest_common.sh@905 -- # return 0 00:19:44.359 16:58:33 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:44.359 [2024-11-05 16:58:33.255893] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:44.618 [2024-11-05 16:58:33.257870] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:44.618 [2024-11-05 16:58:33.258086] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:44.618 [2024-11-05 16:58:33.258219] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:44.618 [2024-11-05 16:58:33.258283] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:44.618 [2024-11-05 16:58:33.258379] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:44.618 [2024-11-05 16:58:33.258435] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:44.618 16:58:33 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:44.618 16:58:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:44.618 16:58:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:44.618 16:58:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:44.618 16:58:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:44.618 16:58:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:44.618 16:58:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:44.618 16:58:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:44.618 16:58:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:44.618 16:58:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:44.618 16:58:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:44.618 16:58:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:44.618 16:58:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.618 16:58:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.618 16:58:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:44.618 "name": "Existed_Raid", 00:19:44.618 "uuid": 
"29ec72b6-08a0-4959-8a75-b3f0baccea67", 00:19:44.618 "strip_size_kb": 0, 00:19:44.618 "state": "configuring", 00:19:44.618 "raid_level": "raid1", 00:19:44.618 "superblock": true, 00:19:44.618 "num_base_bdevs": 4, 00:19:44.618 "num_base_bdevs_discovered": 1, 00:19:44.618 "num_base_bdevs_operational": 4, 00:19:44.618 "base_bdevs_list": [ 00:19:44.618 { 00:19:44.618 "name": "BaseBdev1", 00:19:44.618 "uuid": "c0cc2fe8-4f3e-4388-a3bb-fec31a9c1607", 00:19:44.619 "is_configured": true, 00:19:44.619 "data_offset": 2048, 00:19:44.619 "data_size": 63488 00:19:44.619 }, 00:19:44.619 { 00:19:44.619 "name": "BaseBdev2", 00:19:44.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.619 "is_configured": false, 00:19:44.619 "data_offset": 0, 00:19:44.619 "data_size": 0 00:19:44.619 }, 00:19:44.619 { 00:19:44.619 "name": "BaseBdev3", 00:19:44.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.619 "is_configured": false, 00:19:44.619 "data_offset": 0, 00:19:44.619 "data_size": 0 00:19:44.619 }, 00:19:44.619 { 00:19:44.619 "name": "BaseBdev4", 00:19:44.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.619 "is_configured": false, 00:19:44.619 "data_offset": 0, 00:19:44.619 "data_size": 0 00:19:44.619 } 00:19:44.619 ] 00:19:44.619 }' 00:19:44.619 16:58:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:44.619 16:58:33 -- common/autotest_common.sh@10 -- # set +x 00:19:45.554 16:58:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:45.554 [2024-11-05 16:58:34.317777] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:45.554 BaseBdev2 00:19:45.554 16:58:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:45.554 16:58:34 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:45.554 16:58:34 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:45.554 16:58:34 -- common/autotest_common.sh@899 -- # local i 00:19:45.554 16:58:34 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:45.554 16:58:34 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:45.554 16:58:34 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:45.813 16:58:34 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:46.072 [ 00:19:46.072 { 00:19:46.072 "name": "BaseBdev2", 00:19:46.072 "aliases": [ 00:19:46.072 "4a0f13c6-4eef-4403-bf71-f28490fe61a2" 00:19:46.072 ], 00:19:46.072 "product_name": "Malloc disk", 00:19:46.072 "block_size": 512, 00:19:46.072 "num_blocks": 65536, 00:19:46.072 "uuid": "4a0f13c6-4eef-4403-bf71-f28490fe61a2", 00:19:46.072 "assigned_rate_limits": { 00:19:46.072 "rw_ios_per_sec": 0, 00:19:46.072 "rw_mbytes_per_sec": 0, 00:19:46.072 "r_mbytes_per_sec": 0, 00:19:46.072 "w_mbytes_per_sec": 0 00:19:46.072 }, 00:19:46.072 "claimed": true, 00:19:46.072 "claim_type": "exclusive_write", 00:19:46.072 "zoned": false, 00:19:46.072 "supported_io_types": { 00:19:46.072 "read": true, 00:19:46.072 "write": true, 00:19:46.072 "unmap": true, 00:19:46.072 "write_zeroes": true, 00:19:46.072 "flush": true, 00:19:46.072 "reset": true, 00:19:46.072 "compare": false, 00:19:46.072 "compare_and_write": false, 00:19:46.072 "abort": true, 00:19:46.072 "nvme_admin": false, 00:19:46.072 "nvme_io": false 00:19:46.072 }, 00:19:46.072 "memory_domains": [ 00:19:46.072 { 
00:19:46.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.072 "dma_device_type": 2 00:19:46.072 } 00:19:46.072 ], 00:19:46.072 "driver_specific": {} 00:19:46.072 } 00:19:46.072 ] 00:19:46.072 16:58:34 -- common/autotest_common.sh@905 -- # return 0 00:19:46.072 16:58:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:46.072 16:58:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:46.072 16:58:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:46.072 16:58:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:46.072 16:58:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:46.072 16:58:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:46.072 16:58:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:46.072 16:58:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:46.072 16:58:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:46.072 16:58:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:46.072 16:58:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:46.072 16:58:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:46.072 16:58:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.072 16:58:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.072 16:58:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:46.072 "name": "Existed_Raid", 00:19:46.072 "uuid": "29ec72b6-08a0-4959-8a75-b3f0baccea67", 00:19:46.072 "strip_size_kb": 0, 00:19:46.072 "state": "configuring", 00:19:46.072 "raid_level": "raid1", 00:19:46.072 "superblock": true, 00:19:46.072 "num_base_bdevs": 4, 00:19:46.072 "num_base_bdevs_discovered": 2, 00:19:46.072 "num_base_bdevs_operational": 4, 00:19:46.072 "base_bdevs_list": [ 00:19:46.072 { 00:19:46.072 "name": "BaseBdev1", 00:19:46.072 "uuid": "c0cc2fe8-4f3e-4388-a3bb-fec31a9c1607", 00:19:46.072 "is_configured": true, 00:19:46.072 "data_offset": 2048, 00:19:46.072 "data_size": 63488 00:19:46.072 }, 00:19:46.072 { 00:19:46.072 "name": "BaseBdev2", 00:19:46.072 "uuid": "4a0f13c6-4eef-4403-bf71-f28490fe61a2", 00:19:46.072 "is_configured": true, 00:19:46.072 "data_offset": 2048, 00:19:46.072 "data_size": 63488 00:19:46.072 }, 00:19:46.072 { 00:19:46.072 "name": "BaseBdev3", 00:19:46.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.072 "is_configured": false, 00:19:46.072 "data_offset": 0, 00:19:46.072 "data_size": 0 00:19:46.072 }, 00:19:46.072 { 00:19:46.072 "name": "BaseBdev4", 00:19:46.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.072 "is_configured": false, 00:19:46.072 "data_offset": 0, 00:19:46.072 "data_size": 0 00:19:46.072 } 00:19:46.072 ] 00:19:46.072 }' 00:19:46.072 16:58:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:46.072 16:58:34 -- common/autotest_common.sh@10 -- # set +x 00:19:47.008 16:58:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:47.009 [2024-11-05 16:58:35.762447] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:47.009 BaseBdev3 00:19:47.009 16:58:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:47.009 16:58:35 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:47.009 16:58:35 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:47.009 16:58:35 -- 
common/autotest_common.sh@899 -- # local i 00:19:47.009 16:58:35 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:47.009 16:58:35 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:47.009 16:58:35 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:47.269 16:58:35 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:47.269 [ 00:19:47.269 { 00:19:47.269 "name": "BaseBdev3", 00:19:47.269 "aliases": [ 00:19:47.269 "904a03a7-bf09-4db1-8077-95044894aa4e" 00:19:47.269 ], 00:19:47.269 "product_name": "Malloc disk", 00:19:47.269 "block_size": 512, 00:19:47.269 "num_blocks": 65536, 00:19:47.269 "uuid": "904a03a7-bf09-4db1-8077-95044894aa4e", 00:19:47.269 "assigned_rate_limits": { 00:19:47.269 "rw_ios_per_sec": 0, 00:19:47.269 "rw_mbytes_per_sec": 0, 00:19:47.269 "r_mbytes_per_sec": 0, 00:19:47.269 "w_mbytes_per_sec": 0 00:19:47.269 }, 00:19:47.269 "claimed": true, 00:19:47.269 "claim_type": "exclusive_write", 00:19:47.269 "zoned": false, 00:19:47.269 "supported_io_types": { 00:19:47.269 "read": true, 00:19:47.269 "write": true, 00:19:47.269 "unmap": true, 00:19:47.269 "write_zeroes": true, 00:19:47.269 "flush": true, 00:19:47.269 "reset": true, 00:19:47.269 "compare": false, 00:19:47.269 "compare_and_write": false, 00:19:47.269 "abort": true, 00:19:47.269 "nvme_admin": false, 00:19:47.269 "nvme_io": false 00:19:47.269 }, 00:19:47.269 "memory_domains": [ 00:19:47.269 { 00:19:47.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.269 "dma_device_type": 2 00:19:47.269 } 00:19:47.269 ], 00:19:47.269 "driver_specific": {} 00:19:47.269 } 00:19:47.269 ] 00:19:47.532 16:58:36 -- common/autotest_common.sh@905 -- # return 0 00:19:47.532 16:58:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:47.532 16:58:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:47.532 16:58:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:47.532 16:58:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:47.532 16:58:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:47.532 16:58:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:47.532 16:58:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:47.532 16:58:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:47.532 16:58:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:47.532 16:58:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:47.532 16:58:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:47.532 16:58:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:47.532 16:58:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.532 16:58:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.532 16:58:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:47.532 "name": "Existed_Raid", 00:19:47.532 "uuid": "29ec72b6-08a0-4959-8a75-b3f0baccea67", 00:19:47.532 "strip_size_kb": 0, 00:19:47.532 "state": "configuring", 00:19:47.532 "raid_level": "raid1", 00:19:47.532 "superblock": true, 00:19:47.532 "num_base_bdevs": 4, 00:19:47.532 "num_base_bdevs_discovered": 3, 00:19:47.532 "num_base_bdevs_operational": 4, 00:19:47.532 "base_bdevs_list": [ 00:19:47.532 { 00:19:47.532 "name": "BaseBdev1", 00:19:47.532 
"uuid": "c0cc2fe8-4f3e-4388-a3bb-fec31a9c1607", 00:19:47.532 "is_configured": true, 00:19:47.532 "data_offset": 2048, 00:19:47.532 "data_size": 63488 00:19:47.532 }, 00:19:47.532 { 00:19:47.532 "name": "BaseBdev2", 00:19:47.532 "uuid": "4a0f13c6-4eef-4403-bf71-f28490fe61a2", 00:19:47.532 "is_configured": true, 00:19:47.532 "data_offset": 2048, 00:19:47.532 "data_size": 63488 00:19:47.532 }, 00:19:47.532 { 00:19:47.532 "name": "BaseBdev3", 00:19:47.532 "uuid": "904a03a7-bf09-4db1-8077-95044894aa4e", 00:19:47.532 "is_configured": true, 00:19:47.532 "data_offset": 2048, 00:19:47.532 "data_size": 63488 00:19:47.532 }, 00:19:47.532 { 00:19:47.532 "name": "BaseBdev4", 00:19:47.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.532 "is_configured": false, 00:19:47.532 "data_offset": 0, 00:19:47.532 "data_size": 0 00:19:47.532 } 00:19:47.532 ] 00:19:47.532 }' 00:19:47.532 16:58:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:47.532 16:58:36 -- common/autotest_common.sh@10 -- # set +x 00:19:48.100 16:58:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:48.358 [2024-11-05 16:58:37.142702] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:48.358 [2024-11-05 16:58:37.143226] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:19:48.358 [2024-11-05 16:58:37.143356] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:48.358 [2024-11-05 16:58:37.143521] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:19:48.358 BaseBdev4 00:19:48.358 [2024-11-05 16:58:37.144014] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:19:48.358 [2024-11-05 16:58:37.144030] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:19:48.358 [2024-11-05 16:58:37.144188] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.358 16:58:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:48.358 16:58:37 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:19:48.358 16:58:37 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:48.358 16:58:37 -- common/autotest_common.sh@899 -- # local i 00:19:48.358 16:58:37 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:48.358 16:58:37 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:48.358 16:58:37 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:48.617 16:58:37 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:48.876 [ 00:19:48.876 { 00:19:48.876 "name": "BaseBdev4", 00:19:48.876 "aliases": [ 00:19:48.876 "3f003138-b5b1-44d7-b44d-bb0d8cc6ebf7" 00:19:48.876 ], 00:19:48.876 "product_name": "Malloc disk", 00:19:48.876 "block_size": 512, 00:19:48.876 "num_blocks": 65536, 00:19:48.876 "uuid": "3f003138-b5b1-44d7-b44d-bb0d8cc6ebf7", 00:19:48.876 "assigned_rate_limits": { 00:19:48.876 "rw_ios_per_sec": 0, 00:19:48.876 "rw_mbytes_per_sec": 0, 00:19:48.876 "r_mbytes_per_sec": 0, 00:19:48.876 "w_mbytes_per_sec": 0 00:19:48.876 }, 00:19:48.876 "claimed": true, 00:19:48.876 "claim_type": "exclusive_write", 00:19:48.876 "zoned": false, 00:19:48.876 "supported_io_types": { 00:19:48.876 
"read": true, 00:19:48.876 "write": true, 00:19:48.876 "unmap": true, 00:19:48.876 "write_zeroes": true, 00:19:48.876 "flush": true, 00:19:48.876 "reset": true, 00:19:48.876 "compare": false, 00:19:48.876 "compare_and_write": false, 00:19:48.876 "abort": true, 00:19:48.876 "nvme_admin": false, 00:19:48.876 "nvme_io": false 00:19:48.876 }, 00:19:48.876 "memory_domains": [ 00:19:48.876 { 00:19:48.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.876 "dma_device_type": 2 00:19:48.876 } 00:19:48.876 ], 00:19:48.876 "driver_specific": {} 00:19:48.876 } 00:19:48.876 ] 00:19:48.876 16:58:37 -- common/autotest_common.sh@905 -- # return 0 00:19:48.876 16:58:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:48.876 16:58:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:48.876 16:58:37 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:48.876 16:58:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:48.876 16:58:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:48.876 16:58:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:48.876 16:58:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:48.876 16:58:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:48.876 16:58:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:48.876 16:58:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:48.876 16:58:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:48.876 16:58:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:48.876 16:58:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.876 16:58:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.135 16:58:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:49.135 "name": "Existed_Raid", 00:19:49.135 "uuid": "29ec72b6-08a0-4959-8a75-b3f0baccea67", 00:19:49.135 "strip_size_kb": 0, 00:19:49.135 "state": "online", 00:19:49.135 "raid_level": "raid1", 00:19:49.135 "superblock": true, 00:19:49.135 "num_base_bdevs": 4, 00:19:49.135 "num_base_bdevs_discovered": 4, 00:19:49.135 "num_base_bdevs_operational": 4, 00:19:49.135 "base_bdevs_list": [ 00:19:49.135 { 00:19:49.135 "name": "BaseBdev1", 00:19:49.135 "uuid": "c0cc2fe8-4f3e-4388-a3bb-fec31a9c1607", 00:19:49.135 "is_configured": true, 00:19:49.135 "data_offset": 2048, 00:19:49.135 "data_size": 63488 00:19:49.135 }, 00:19:49.135 { 00:19:49.135 "name": "BaseBdev2", 00:19:49.135 "uuid": "4a0f13c6-4eef-4403-bf71-f28490fe61a2", 00:19:49.135 "is_configured": true, 00:19:49.135 "data_offset": 2048, 00:19:49.135 "data_size": 63488 00:19:49.135 }, 00:19:49.135 { 00:19:49.135 "name": "BaseBdev3", 00:19:49.135 "uuid": "904a03a7-bf09-4db1-8077-95044894aa4e", 00:19:49.135 "is_configured": true, 00:19:49.135 "data_offset": 2048, 00:19:49.135 "data_size": 63488 00:19:49.135 }, 00:19:49.135 { 00:19:49.135 "name": "BaseBdev4", 00:19:49.135 "uuid": "3f003138-b5b1-44d7-b44d-bb0d8cc6ebf7", 00:19:49.135 "is_configured": true, 00:19:49.135 "data_offset": 2048, 00:19:49.135 "data_size": 63488 00:19:49.135 } 00:19:49.135 ] 00:19:49.135 }' 00:19:49.135 16:58:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:49.135 16:58:37 -- common/autotest_common.sh@10 -- # set +x 00:19:49.702 16:58:38 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:49.961 [2024-11-05 16:58:38.679078] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:49.961 16:58:38 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:49.961 16:58:38 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:49.961 16:58:38 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:49.961 16:58:38 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:49.961 16:58:38 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:49.961 16:58:38 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:49.961 16:58:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:49.961 16:58:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:49.961 16:58:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:49.961 16:58:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:49.961 16:58:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:49.961 16:58:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:49.961 16:58:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:49.961 16:58:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:49.961 16:58:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:49.961 16:58:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.961 16:58:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:50.220 16:58:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:50.220 "name": "Existed_Raid", 00:19:50.220 "uuid": "29ec72b6-08a0-4959-8a75-b3f0baccea67", 00:19:50.220 "strip_size_kb": 0, 00:19:50.220 "state": "online", 00:19:50.220 "raid_level": "raid1", 00:19:50.220 "superblock": true, 00:19:50.220 "num_base_bdevs": 4, 00:19:50.220 "num_base_bdevs_discovered": 3, 00:19:50.220 "num_base_bdevs_operational": 3, 00:19:50.220 "base_bdevs_list": [ 00:19:50.220 { 00:19:50.220 "name": null, 00:19:50.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.220 "is_configured": false, 00:19:50.220 "data_offset": 2048, 00:19:50.220 "data_size": 63488 00:19:50.220 }, 00:19:50.220 { 00:19:50.220 "name": "BaseBdev2", 00:19:50.220 "uuid": "4a0f13c6-4eef-4403-bf71-f28490fe61a2", 00:19:50.220 "is_configured": true, 00:19:50.220 "data_offset": 2048, 00:19:50.220 "data_size": 63488 00:19:50.220 }, 00:19:50.220 { 00:19:50.220 "name": "BaseBdev3", 00:19:50.220 "uuid": "904a03a7-bf09-4db1-8077-95044894aa4e", 00:19:50.220 "is_configured": true, 00:19:50.220 "data_offset": 2048, 00:19:50.220 "data_size": 63488 00:19:50.220 }, 00:19:50.220 { 00:19:50.220 "name": "BaseBdev4", 00:19:50.220 "uuid": "3f003138-b5b1-44d7-b44d-bb0d8cc6ebf7", 00:19:50.220 "is_configured": true, 00:19:50.220 "data_offset": 2048, 00:19:50.220 "data_size": 63488 00:19:50.220 } 00:19:50.220 ] 00:19:50.220 }' 00:19:50.220 16:58:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:50.220 16:58:39 -- common/autotest_common.sh@10 -- # set +x 00:19:50.787 16:58:39 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:50.787 16:58:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:50.787 16:58:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.787 16:58:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:51.045 16:58:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:51.045 16:58:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:51.045 16:58:39 -- bdev/bdev_raid.sh@279 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:51.303 [2024-11-05 16:58:40.127362] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:51.562 16:58:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:51.562 16:58:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:51.562 16:58:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.562 16:58:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:51.562 16:58:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:51.562 16:58:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:51.562 16:58:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:51.820 [2024-11-05 16:58:40.679012] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:52.079 16:58:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:52.079 16:58:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:52.079 16:58:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.079 16:58:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:52.079 16:58:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:52.079 16:58:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:52.079 16:58:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:52.338 [2024-11-05 16:58:41.147335] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:52.338 [2024-11-05 16:58:41.147526] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:52.338 [2024-11-05 16:58:41.147689] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:52.338 [2024-11-05 16:58:41.213081] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:52.338 [2024-11-05 16:58:41.213348] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:19:52.338 16:58:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:52.338 16:58:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:52.338 16:58:41 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.338 16:58:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:52.596 16:58:41 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:52.596 16:58:41 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:52.596 16:58:41 -- bdev/bdev_raid.sh@287 -- # killprocess 121295 00:19:52.596 16:58:41 -- common/autotest_common.sh@936 -- # '[' -z 121295 ']' 00:19:52.597 16:58:41 -- common/autotest_common.sh@940 -- # kill -0 121295 00:19:52.597 16:58:41 -- common/autotest_common.sh@941 -- # uname 00:19:52.597 16:58:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:52.597 16:58:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121295 00:19:52.597 killing process with pid 121295 00:19:52.597 16:58:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:52.597 16:58:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:52.597 16:58:41 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 121295' 00:19:52.597 16:58:41 -- common/autotest_common.sh@955 -- # kill 121295 00:19:52.597 16:58:41 -- common/autotest_common.sh@960 -- # wait 121295 00:19:52.597 [2024-11-05 16:58:41.463910] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:52.597 [2024-11-05 16:58:41.464006] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:53.532 ************************************ 00:19:53.532 END TEST raid_state_function_test_sb 00:19:53.532 ************************************ 00:19:53.532 16:58:42 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:53.532 00:19:53.532 real 0m14.457s 00:19:53.532 user 0m25.768s 00:19:53.532 sys 0m1.727s 00:19:53.532 16:58:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:53.532 16:58:42 -- common/autotest_common.sh@10 -- # set +x 00:19:53.791 16:58:42 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:19:53.791 16:58:42 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:53.791 16:58:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:53.791 16:58:42 -- common/autotest_common.sh@10 -- # set +x 00:19:53.791 ************************************ 00:19:53.791 START TEST raid_superblock_test 00:19:53.791 ************************************ 00:19:53.791 16:58:42 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 4 00:19:53.791 16:58:42 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:19:53.791 16:58:42 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:19:53.791 16:58:42 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:19:53.791 16:58:42 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:19:53.791 16:58:42 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:19:53.791 16:58:42 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:19:53.791 16:58:42 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:19:53.791 16:58:42 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:19:53.791 16:58:42 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:19:53.791 16:58:42 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:19:53.791 16:58:42 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:19:53.791 16:58:42 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:19:53.791 16:58:42 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:19:53.791 16:58:42 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:19:53.791 16:58:42 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:19:53.791 16:58:42 -- bdev/bdev_raid.sh@357 -- # raid_pid=121744 00:19:53.791 16:58:42 -- bdev/bdev_raid.sh@358 -- # waitforlisten 121744 /var/tmp/spdk-raid.sock 00:19:53.791 16:58:42 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:53.791 16:58:42 -- common/autotest_common.sh@829 -- # '[' -z 121744 ']' 00:19:53.791 16:58:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:53.791 16:58:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:53.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:53.791 16:58:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:19:53.791 16:58:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:53.791 16:58:42 -- common/autotest_common.sh@10 -- # set +x 00:19:53.791 [2024-11-05 16:58:42.516372] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:53.791 [2024-11-05 16:58:42.516821] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121744 ] 00:19:53.791 [2024-11-05 16:58:42.688744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.050 [2024-11-05 16:58:42.894082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.308 [2024-11-05 16:58:43.058478] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:54.566 16:58:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:54.566 16:58:43 -- common/autotest_common.sh@862 -- # return 0 00:19:54.566 16:58:43 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:54.566 16:58:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:54.566 16:58:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:54.566 16:58:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:54.566 16:58:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:54.567 16:58:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:54.567 16:58:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:54.567 16:58:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:54.567 16:58:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:54.825 malloc1 00:19:54.825 16:58:43 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:55.083 [2024-11-05 16:58:43.834150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:55.083 [2024-11-05 16:58:43.834394] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.083 [2024-11-05 16:58:43.834536] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:19:55.083 [2024-11-05 16:58:43.834687] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.083 [2024-11-05 16:58:43.837107] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.083 [2024-11-05 16:58:43.837282] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:55.083 pt1 00:19:55.083 16:58:43 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:55.083 16:58:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:55.083 16:58:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:55.083 16:58:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:55.083 16:58:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:55.083 16:58:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:55.083 16:58:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:55.083 16:58:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:55.083 16:58:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:55.341 malloc2 00:19:55.341 16:58:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:55.600 [2024-11-05 16:58:44.261676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:55.600 [2024-11-05 16:58:44.261918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.600 [2024-11-05 16:58:44.262005] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:55.600 [2024-11-05 16:58:44.262247] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.600 [2024-11-05 16:58:44.264617] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.600 [2024-11-05 16:58:44.264809] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:55.600 pt2 00:19:55.600 16:58:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:55.600 16:58:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:55.600 16:58:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:55.600 16:58:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:55.600 16:58:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:55.600 16:58:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:55.600 16:58:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:55.600 16:58:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:55.600 16:58:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:55.600 malloc3 00:19:55.869 16:58:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:55.869 [2024-11-05 16:58:44.747817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:55.869 [2024-11-05 16:58:44.748049] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.869 [2024-11-05 16:58:44.748130] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:19:55.869 [2024-11-05 16:58:44.748277] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.869 [2024-11-05 16:58:44.750641] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.869 [2024-11-05 16:58:44.750835] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:55.869 pt3 00:19:55.869 16:58:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:55.869 16:58:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:55.869 16:58:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:19:55.869 16:58:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:19:55.869 16:58:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:55.869 16:58:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:55.869 16:58:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:55.869 16:58:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:55.869 16:58:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:56.131 malloc4 00:19:56.131 16:58:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:56.390 [2024-11-05 16:58:45.157564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:56.390 [2024-11-05 16:58:45.157802] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.390 [2024-11-05 16:58:45.157873] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:19:56.390 [2024-11-05 16:58:45.158079] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.390 [2024-11-05 16:58:45.160327] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.390 [2024-11-05 16:58:45.160531] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:56.390 pt4 00:19:56.390 16:58:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:56.390 16:58:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:56.390 16:58:45 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:56.649 [2024-11-05 16:58:45.345641] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:56.649 [2024-11-05 16:58:45.347677] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:56.649 [2024-11-05 16:58:45.347902] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:56.649 [2024-11-05 16:58:45.348066] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:56.649 [2024-11-05 16:58:45.348318] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:19:56.649 [2024-11-05 16:58:45.348443] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:56.649 [2024-11-05 16:58:45.348628] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:19:56.649 [2024-11-05 16:58:45.349101] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:19:56.649 [2024-11-05 16:58:45.349251] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:19:56.649 [2024-11-05 16:58:45.349471] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.649 16:58:45 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:56.649 16:58:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:56.649 16:58:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:56.649 16:58:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:56.649 16:58:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:56.649 16:58:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:56.649 16:58:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:56.649 16:58:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:56.649 16:58:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:56.649 16:58:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:56.649 16:58:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:19:56.649 16:58:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.908 16:58:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:56.908 "name": "raid_bdev1", 00:19:56.908 "uuid": "a8f289dc-1339-4903-abe0-8629f3a35716", 00:19:56.908 "strip_size_kb": 0, 00:19:56.908 "state": "online", 00:19:56.908 "raid_level": "raid1", 00:19:56.908 "superblock": true, 00:19:56.908 "num_base_bdevs": 4, 00:19:56.908 "num_base_bdevs_discovered": 4, 00:19:56.908 "num_base_bdevs_operational": 4, 00:19:56.908 "base_bdevs_list": [ 00:19:56.908 { 00:19:56.908 "name": "pt1", 00:19:56.908 "uuid": "b41c7e5c-6074-54e7-b011-b6c9fca22a22", 00:19:56.908 "is_configured": true, 00:19:56.908 "data_offset": 2048, 00:19:56.908 "data_size": 63488 00:19:56.908 }, 00:19:56.908 { 00:19:56.908 "name": "pt2", 00:19:56.908 "uuid": "d99490d6-a997-5b26-9cf4-9cc3b7cbd9f0", 00:19:56.908 "is_configured": true, 00:19:56.908 "data_offset": 2048, 00:19:56.908 "data_size": 63488 00:19:56.908 }, 00:19:56.908 { 00:19:56.908 "name": "pt3", 00:19:56.908 "uuid": "a0001ced-ecdc-55e4-926c-0af3e2a29bec", 00:19:56.908 "is_configured": true, 00:19:56.908 "data_offset": 2048, 00:19:56.908 "data_size": 63488 00:19:56.908 }, 00:19:56.908 { 00:19:56.908 "name": "pt4", 00:19:56.908 "uuid": "b7520a16-6552-51fd-87ba-1ef2f88d1980", 00:19:56.908 "is_configured": true, 00:19:56.908 "data_offset": 2048, 00:19:56.908 "data_size": 63488 00:19:56.908 } 00:19:56.908 ] 00:19:56.908 }' 00:19:56.908 16:58:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:56.908 16:58:45 -- common/autotest_common.sh@10 -- # set +x 00:19:57.475 16:58:46 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:57.475 16:58:46 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:57.734 [2024-11-05 16:58:46.414068] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:57.734 16:58:46 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=a8f289dc-1339-4903-abe0-8629f3a35716 00:19:57.734 16:58:46 -- bdev/bdev_raid.sh@380 -- # '[' -z a8f289dc-1339-4903-abe0-8629f3a35716 ']' 00:19:57.734 16:58:46 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:57.993 [2024-11-05 16:58:46.669903] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:57.993 [2024-11-05 16:58:46.670086] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:57.993 [2024-11-05 16:58:46.670244] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:57.993 [2024-11-05 16:58:46.670485] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:57.993 [2024-11-05 16:58:46.670603] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:19:57.993 16:58:46 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.993 16:58:46 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:57.993 16:58:46 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:57.993 16:58:46 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:57.993 16:58:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:57.993 16:58:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:19:58.251 16:58:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:58.251 16:58:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:58.509 16:58:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:58.509 16:58:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:58.768 16:58:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:58.768 16:58:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:59.026 16:58:47 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:59.026 16:58:47 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:59.286 16:58:47 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:59.286 16:58:47 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:59.286 16:58:47 -- common/autotest_common.sh@650 -- # local es=0 00:19:59.286 16:58:47 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:59.286 16:58:47 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:59.286 16:58:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:59.286 16:58:47 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:59.286 16:58:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:59.286 16:58:47 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:59.286 16:58:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:59.286 16:58:47 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:59.286 16:58:47 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:59.286 16:58:47 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:59.286 [2024-11-05 16:58:48.146096] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:59.286 [2024-11-05 16:58:48.148112] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:59.286 [2024-11-05 16:58:48.148326] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:59.286 [2024-11-05 16:58:48.148405] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:59.286 [2024-11-05 16:58:48.148591] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:59.286 [2024-11-05 16:58:48.148787] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:59.286 [2024-11-05 16:58:48.148921] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:59.286 [2024-11-05 16:58:48.149077] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:19:59.286 [2024-11-05 16:58:48.149194] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:59.286 [2024-11-05 16:58:48.149236] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:19:59.286 request: 00:19:59.286 { 00:19:59.286 "name": "raid_bdev1", 00:19:59.286 "raid_level": "raid1", 00:19:59.286 "base_bdevs": [ 00:19:59.286 "malloc1", 00:19:59.286 "malloc2", 00:19:59.286 "malloc3", 00:19:59.286 "malloc4" 00:19:59.286 ], 00:19:59.286 "superblock": false, 00:19:59.286 "method": "bdev_raid_create", 00:19:59.286 "req_id": 1 00:19:59.286 } 00:19:59.286 Got JSON-RPC error response 00:19:59.286 response: 00:19:59.286 { 00:19:59.286 "code": -17, 00:19:59.286 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:59.286 } 00:19:59.286 16:58:48 -- common/autotest_common.sh@653 -- # es=1 00:19:59.286 16:58:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:59.286 16:58:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:59.286 16:58:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:59.286 16:58:48 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:59.286 16:58:48 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.545 16:58:48 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:59.545 16:58:48 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:59.545 16:58:48 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:59.809 [2024-11-05 16:58:48.586100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:59.809 [2024-11-05 16:58:48.586340] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.809 [2024-11-05 16:58:48.586411] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:59.809 [2024-11-05 16:58:48.586653] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.809 [2024-11-05 16:58:48.589011] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.809 [2024-11-05 16:58:48.589215] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:59.809 [2024-11-05 16:58:48.589435] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:59.809 [2024-11-05 16:58:48.589600] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:59.809 pt1 00:19:59.809 16:58:48 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:59.809 16:58:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:59.809 16:58:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:59.809 16:58:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:59.809 16:58:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:59.809 16:58:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:59.809 16:58:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:59.809 16:58:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:59.809 16:58:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:59.809 16:58:48 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:19:59.809 16:58:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.809 16:58:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.069 16:58:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:00.069 "name": "raid_bdev1", 00:20:00.069 "uuid": "a8f289dc-1339-4903-abe0-8629f3a35716", 00:20:00.069 "strip_size_kb": 0, 00:20:00.069 "state": "configuring", 00:20:00.069 "raid_level": "raid1", 00:20:00.069 "superblock": true, 00:20:00.069 "num_base_bdevs": 4, 00:20:00.069 "num_base_bdevs_discovered": 1, 00:20:00.069 "num_base_bdevs_operational": 4, 00:20:00.069 "base_bdevs_list": [ 00:20:00.069 { 00:20:00.069 "name": "pt1", 00:20:00.069 "uuid": "b41c7e5c-6074-54e7-b011-b6c9fca22a22", 00:20:00.069 "is_configured": true, 00:20:00.069 "data_offset": 2048, 00:20:00.069 "data_size": 63488 00:20:00.069 }, 00:20:00.069 { 00:20:00.069 "name": null, 00:20:00.069 "uuid": "d99490d6-a997-5b26-9cf4-9cc3b7cbd9f0", 00:20:00.069 "is_configured": false, 00:20:00.069 "data_offset": 2048, 00:20:00.069 "data_size": 63488 00:20:00.069 }, 00:20:00.069 { 00:20:00.069 "name": null, 00:20:00.069 "uuid": "a0001ced-ecdc-55e4-926c-0af3e2a29bec", 00:20:00.069 "is_configured": false, 00:20:00.069 "data_offset": 2048, 00:20:00.069 "data_size": 63488 00:20:00.069 }, 00:20:00.069 { 00:20:00.069 "name": null, 00:20:00.069 "uuid": "b7520a16-6552-51fd-87ba-1ef2f88d1980", 00:20:00.069 "is_configured": false, 00:20:00.069 "data_offset": 2048, 00:20:00.069 "data_size": 63488 00:20:00.069 } 00:20:00.069 ] 00:20:00.069 }' 00:20:00.069 16:58:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:00.069 16:58:48 -- common/autotest_common.sh@10 -- # set +x 00:20:00.636 16:58:49 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:20:00.636 16:58:49 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:00.893 [2024-11-05 16:58:49.674340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:00.893 [2024-11-05 16:58:49.674591] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.894 [2024-11-05 16:58:49.674687] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:00.894 [2024-11-05 16:58:49.674949] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.894 [2024-11-05 16:58:49.675583] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.894 [2024-11-05 16:58:49.675777] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:00.894 [2024-11-05 16:58:49.675977] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:00.894 [2024-11-05 16:58:49.676125] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:00.894 pt2 00:20:00.894 16:58:49 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:01.151 [2024-11-05 16:58:49.866383] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:01.151 16:58:49 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:01.151 16:58:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:01.151 16:58:49 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:20:01.151 16:58:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:01.151 16:58:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:01.151 16:58:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:01.151 16:58:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:01.151 16:58:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:01.151 16:58:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:01.151 16:58:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:01.151 16:58:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.151 16:58:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.410 16:58:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:01.410 "name": "raid_bdev1", 00:20:01.410 "uuid": "a8f289dc-1339-4903-abe0-8629f3a35716", 00:20:01.410 "strip_size_kb": 0, 00:20:01.410 "state": "configuring", 00:20:01.410 "raid_level": "raid1", 00:20:01.410 "superblock": true, 00:20:01.410 "num_base_bdevs": 4, 00:20:01.410 "num_base_bdevs_discovered": 1, 00:20:01.410 "num_base_bdevs_operational": 4, 00:20:01.410 "base_bdevs_list": [ 00:20:01.410 { 00:20:01.410 "name": "pt1", 00:20:01.410 "uuid": "b41c7e5c-6074-54e7-b011-b6c9fca22a22", 00:20:01.410 "is_configured": true, 00:20:01.410 "data_offset": 2048, 00:20:01.410 "data_size": 63488 00:20:01.410 }, 00:20:01.410 { 00:20:01.410 "name": null, 00:20:01.410 "uuid": "d99490d6-a997-5b26-9cf4-9cc3b7cbd9f0", 00:20:01.410 "is_configured": false, 00:20:01.410 "data_offset": 2048, 00:20:01.410 "data_size": 63488 00:20:01.410 }, 00:20:01.410 { 00:20:01.410 "name": null, 00:20:01.410 "uuid": "a0001ced-ecdc-55e4-926c-0af3e2a29bec", 00:20:01.410 "is_configured": false, 00:20:01.410 "data_offset": 2048, 00:20:01.410 "data_size": 63488 00:20:01.410 }, 00:20:01.410 { 00:20:01.410 "name": null, 00:20:01.410 "uuid": "b7520a16-6552-51fd-87ba-1ef2f88d1980", 00:20:01.410 "is_configured": false, 00:20:01.410 "data_offset": 2048, 00:20:01.410 "data_size": 63488 00:20:01.410 } 00:20:01.410 ] 00:20:01.410 }' 00:20:01.410 16:58:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:01.410 16:58:50 -- common/autotest_common.sh@10 -- # set +x 00:20:01.977 16:58:50 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:20:01.977 16:58:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:01.977 16:58:50 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:02.236 [2024-11-05 16:58:50.883543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:02.236 [2024-11-05 16:58:50.884232] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.236 [2024-11-05 16:58:50.884561] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:02.236 [2024-11-05 16:58:50.884817] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.236 [2024-11-05 16:58:50.885578] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.236 [2024-11-05 16:58:50.885890] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:02.236 [2024-11-05 16:58:50.886218] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:02.236 [2024-11-05 
16:58:50.886392] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:02.236 pt2 00:20:02.236 16:58:50 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:02.236 16:58:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:02.236 16:58:50 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:02.494 [2024-11-05 16:58:51.135536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:02.494 [2024-11-05 16:58:51.135918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.494 [2024-11-05 16:58:51.136189] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:02.494 [2024-11-05 16:58:51.136437] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.494 [2024-11-05 16:58:51.137121] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.494 [2024-11-05 16:58:51.137426] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:02.494 [2024-11-05 16:58:51.137743] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:02.494 [2024-11-05 16:58:51.137913] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:02.494 pt3 00:20:02.494 16:58:51 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:02.494 16:58:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:02.494 16:58:51 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:02.494 [2024-11-05 16:58:51.339589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:02.494 [2024-11-05 16:58:51.339932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.494 [2024-11-05 16:58:51.340202] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:02.494 [2024-11-05 16:58:51.340461] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.494 [2024-11-05 16:58:51.341122] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.494 [2024-11-05 16:58:51.341411] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:02.494 [2024-11-05 16:58:51.341731] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:02.494 [2024-11-05 16:58:51.341899] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:02.494 [2024-11-05 16:58:51.342134] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:20:02.494 [2024-11-05 16:58:51.342247] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:02.494 [2024-11-05 16:58:51.342395] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:02.495 [2024-11-05 16:58:51.342922] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:20:02.495 [2024-11-05 16:58:51.343062] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:20:02.495 [2024-11-05 16:58:51.343348] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.495 pt4 
00:20:02.495 16:58:51 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:02.495 16:58:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:02.495 16:58:51 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:02.495 16:58:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:02.495 16:58:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:02.495 16:58:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:02.495 16:58:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:02.495 16:58:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:02.495 16:58:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:02.495 16:58:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:02.495 16:58:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:02.495 16:58:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:02.495 16:58:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.495 16:58:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.753 16:58:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:02.753 "name": "raid_bdev1", 00:20:02.753 "uuid": "a8f289dc-1339-4903-abe0-8629f3a35716", 00:20:02.753 "strip_size_kb": 0, 00:20:02.753 "state": "online", 00:20:02.753 "raid_level": "raid1", 00:20:02.753 "superblock": true, 00:20:02.753 "num_base_bdevs": 4, 00:20:02.753 "num_base_bdevs_discovered": 4, 00:20:02.753 "num_base_bdevs_operational": 4, 00:20:02.753 "base_bdevs_list": [ 00:20:02.753 { 00:20:02.753 "name": "pt1", 00:20:02.753 "uuid": "b41c7e5c-6074-54e7-b011-b6c9fca22a22", 00:20:02.753 "is_configured": true, 00:20:02.753 "data_offset": 2048, 00:20:02.753 "data_size": 63488 00:20:02.753 }, 00:20:02.753 { 00:20:02.753 "name": "pt2", 00:20:02.753 "uuid": "d99490d6-a997-5b26-9cf4-9cc3b7cbd9f0", 00:20:02.753 "is_configured": true, 00:20:02.753 "data_offset": 2048, 00:20:02.753 "data_size": 63488 00:20:02.753 }, 00:20:02.753 { 00:20:02.753 "name": "pt3", 00:20:02.753 "uuid": "a0001ced-ecdc-55e4-926c-0af3e2a29bec", 00:20:02.753 "is_configured": true, 00:20:02.753 "data_offset": 2048, 00:20:02.753 "data_size": 63488 00:20:02.753 }, 00:20:02.753 { 00:20:02.753 "name": "pt4", 00:20:02.753 "uuid": "b7520a16-6552-51fd-87ba-1ef2f88d1980", 00:20:02.753 "is_configured": true, 00:20:02.753 "data_offset": 2048, 00:20:02.753 "data_size": 63488 00:20:02.753 } 00:20:02.753 ] 00:20:02.753 }' 00:20:02.753 16:58:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:02.753 16:58:51 -- common/autotest_common.sh@10 -- # set +x 00:20:03.687 16:58:52 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:03.687 16:58:52 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:20:03.687 [2024-11-05 16:58:52.492137] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:03.687 16:58:52 -- bdev/bdev_raid.sh@430 -- # '[' a8f289dc-1339-4903-abe0-8629f3a35716 '!=' a8f289dc-1339-4903-abe0-8629f3a35716 ']' 00:20:03.687 16:58:52 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:20:03.687 16:58:52 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:03.687 16:58:52 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:03.687 16:58:52 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:03.945 [2024-11-05 16:58:52.743883] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:03.945 16:58:52 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:03.945 16:58:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:03.945 16:58:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:03.945 16:58:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:03.945 16:58:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:03.945 16:58:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:03.945 16:58:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:03.945 16:58:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:03.945 16:58:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:03.945 16:58:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:03.945 16:58:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.945 16:58:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.203 16:58:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:04.203 "name": "raid_bdev1", 00:20:04.203 "uuid": "a8f289dc-1339-4903-abe0-8629f3a35716", 00:20:04.203 "strip_size_kb": 0, 00:20:04.203 "state": "online", 00:20:04.203 "raid_level": "raid1", 00:20:04.203 "superblock": true, 00:20:04.203 "num_base_bdevs": 4, 00:20:04.203 "num_base_bdevs_discovered": 3, 00:20:04.203 "num_base_bdevs_operational": 3, 00:20:04.203 "base_bdevs_list": [ 00:20:04.203 { 00:20:04.203 "name": null, 00:20:04.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.203 "is_configured": false, 00:20:04.203 "data_offset": 2048, 00:20:04.203 "data_size": 63488 00:20:04.203 }, 00:20:04.203 { 00:20:04.203 "name": "pt2", 00:20:04.203 "uuid": "d99490d6-a997-5b26-9cf4-9cc3b7cbd9f0", 00:20:04.203 "is_configured": true, 00:20:04.203 "data_offset": 2048, 00:20:04.203 "data_size": 63488 00:20:04.203 }, 00:20:04.203 { 00:20:04.203 "name": "pt3", 00:20:04.203 "uuid": "a0001ced-ecdc-55e4-926c-0af3e2a29bec", 00:20:04.203 "is_configured": true, 00:20:04.203 "data_offset": 2048, 00:20:04.203 "data_size": 63488 00:20:04.203 }, 00:20:04.203 { 00:20:04.203 "name": "pt4", 00:20:04.203 "uuid": "b7520a16-6552-51fd-87ba-1ef2f88d1980", 00:20:04.203 "is_configured": true, 00:20:04.203 "data_offset": 2048, 00:20:04.203 "data_size": 63488 00:20:04.203 } 00:20:04.203 ] 00:20:04.203 }' 00:20:04.203 16:58:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:04.203 16:58:52 -- common/autotest_common.sh@10 -- # set +x 00:20:04.770 16:58:53 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:05.028 [2024-11-05 16:58:53.900027] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:05.028 [2024-11-05 16:58:53.900231] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:05.028 [2024-11-05 16:58:53.900409] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:05.028 [2024-11-05 16:58:53.900630] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:05.028 [2024-11-05 16:58:53.900750] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:20:05.028 16:58:53 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:05.028 16:58:53 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:20:05.287 16:58:54 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:20:05.287 16:58:54 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:20:05.287 16:58:54 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:20:05.287 16:58:54 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:05.287 16:58:54 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:05.546 16:58:54 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:05.546 16:58:54 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:05.546 16:58:54 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:05.804 16:58:54 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:05.804 16:58:54 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:05.804 16:58:54 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:06.063 16:58:54 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:06.063 16:58:54 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:06.063 16:58:54 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:20:06.063 16:58:54 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:06.063 16:58:54 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:06.321 [2024-11-05 16:58:55.048148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:06.321 [2024-11-05 16:58:55.048868] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.321 [2024-11-05 16:58:55.049206] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:06.321 [2024-11-05 16:58:55.049521] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.321 [2024-11-05 16:58:55.052397] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.321 [2024-11-05 16:58:55.052750] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:06.321 [2024-11-05 16:58:55.053114] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:06.321 [2024-11-05 16:58:55.053320] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:06.321 pt2 00:20:06.321 16:58:55 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:06.322 16:58:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:06.322 16:58:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:06.322 16:58:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:06.322 16:58:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:06.322 16:58:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:06.322 16:58:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:06.322 16:58:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:06.322 16:58:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:06.322 16:58:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:06.322 16:58:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.322 16:58:55 -- bdev/bdev_raid.sh@127 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.580 16:58:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:06.580 "name": "raid_bdev1", 00:20:06.580 "uuid": "a8f289dc-1339-4903-abe0-8629f3a35716", 00:20:06.580 "strip_size_kb": 0, 00:20:06.580 "state": "configuring", 00:20:06.580 "raid_level": "raid1", 00:20:06.580 "superblock": true, 00:20:06.580 "num_base_bdevs": 4, 00:20:06.580 "num_base_bdevs_discovered": 1, 00:20:06.580 "num_base_bdevs_operational": 3, 00:20:06.580 "base_bdevs_list": [ 00:20:06.580 { 00:20:06.580 "name": null, 00:20:06.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.580 "is_configured": false, 00:20:06.580 "data_offset": 2048, 00:20:06.580 "data_size": 63488 00:20:06.580 }, 00:20:06.580 { 00:20:06.580 "name": "pt2", 00:20:06.580 "uuid": "d99490d6-a997-5b26-9cf4-9cc3b7cbd9f0", 00:20:06.580 "is_configured": true, 00:20:06.580 "data_offset": 2048, 00:20:06.580 "data_size": 63488 00:20:06.580 }, 00:20:06.580 { 00:20:06.580 "name": null, 00:20:06.580 "uuid": "a0001ced-ecdc-55e4-926c-0af3e2a29bec", 00:20:06.581 "is_configured": false, 00:20:06.581 "data_offset": 2048, 00:20:06.581 "data_size": 63488 00:20:06.581 }, 00:20:06.581 { 00:20:06.581 "name": null, 00:20:06.581 "uuid": "b7520a16-6552-51fd-87ba-1ef2f88d1980", 00:20:06.581 "is_configured": false, 00:20:06.581 "data_offset": 2048, 00:20:06.581 "data_size": 63488 00:20:06.581 } 00:20:06.581 ] 00:20:06.581 }' 00:20:06.581 16:58:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:06.581 16:58:55 -- common/autotest_common.sh@10 -- # set +x 00:20:07.148 16:58:55 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:20:07.148 16:58:55 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:07.148 16:58:55 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:07.407 [2024-11-05 16:58:56.169363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:07.407 [2024-11-05 16:58:56.170139] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.407 [2024-11-05 16:58:56.170452] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:20:07.407 [2024-11-05 16:58:56.170751] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.407 [2024-11-05 16:58:56.171590] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.407 [2024-11-05 16:58:56.171880] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:07.407 [2024-11-05 16:58:56.172217] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:07.407 [2024-11-05 16:58:56.172389] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:07.407 pt3 00:20:07.407 16:58:56 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:07.407 16:58:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:07.407 16:58:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:07.407 16:58:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:07.407 16:58:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:07.407 16:58:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:07.407 16:58:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:07.407 16:58:56 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:20:07.407 16:58:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:07.407 16:58:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:07.407 16:58:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.407 16:58:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.667 16:58:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:07.667 "name": "raid_bdev1", 00:20:07.667 "uuid": "a8f289dc-1339-4903-abe0-8629f3a35716", 00:20:07.667 "strip_size_kb": 0, 00:20:07.667 "state": "configuring", 00:20:07.667 "raid_level": "raid1", 00:20:07.667 "superblock": true, 00:20:07.667 "num_base_bdevs": 4, 00:20:07.667 "num_base_bdevs_discovered": 2, 00:20:07.667 "num_base_bdevs_operational": 3, 00:20:07.667 "base_bdevs_list": [ 00:20:07.667 { 00:20:07.667 "name": null, 00:20:07.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.667 "is_configured": false, 00:20:07.667 "data_offset": 2048, 00:20:07.667 "data_size": 63488 00:20:07.667 }, 00:20:07.667 { 00:20:07.667 "name": "pt2", 00:20:07.667 "uuid": "d99490d6-a997-5b26-9cf4-9cc3b7cbd9f0", 00:20:07.667 "is_configured": true, 00:20:07.667 "data_offset": 2048, 00:20:07.667 "data_size": 63488 00:20:07.667 }, 00:20:07.667 { 00:20:07.667 "name": "pt3", 00:20:07.667 "uuid": "a0001ced-ecdc-55e4-926c-0af3e2a29bec", 00:20:07.667 "is_configured": true, 00:20:07.667 "data_offset": 2048, 00:20:07.667 "data_size": 63488 00:20:07.667 }, 00:20:07.667 { 00:20:07.667 "name": null, 00:20:07.667 "uuid": "b7520a16-6552-51fd-87ba-1ef2f88d1980", 00:20:07.667 "is_configured": false, 00:20:07.667 "data_offset": 2048, 00:20:07.667 "data_size": 63488 00:20:07.667 } 00:20:07.667 ] 00:20:07.667 }' 00:20:07.667 16:58:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:07.667 16:58:56 -- common/autotest_common.sh@10 -- # set +x 00:20:08.235 16:58:57 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:20:08.235 16:58:57 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:08.235 16:58:57 -- bdev/bdev_raid.sh@462 -- # i=3 00:20:08.235 16:58:57 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:08.493 [2024-11-05 16:58:57.295704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:08.493 [2024-11-05 16:58:57.296324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.493 [2024-11-05 16:58:57.296671] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:20:08.493 [2024-11-05 16:58:57.296936] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.493 [2024-11-05 16:58:57.297691] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.493 [2024-11-05 16:58:57.297982] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:08.493 [2024-11-05 16:58:57.298326] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:08.493 [2024-11-05 16:58:57.298520] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:08.493 [2024-11-05 16:58:57.298777] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ba80 00:20:08.493 [2024-11-05 16:58:57.298937] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
00:20:08.493 [2024-11-05 16:58:57.299231] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:20:08.493 [2024-11-05 16:58:57.299755] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ba80 00:20:08.493 [2024-11-05 16:58:57.299893] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ba80 00:20:08.493 [2024-11-05 16:58:57.300192] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.493 pt4 00:20:08.493 16:58:57 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:08.493 16:58:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:08.493 16:58:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:08.493 16:58:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:08.493 16:58:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:08.493 16:58:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:08.493 16:58:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:08.493 16:58:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:08.493 16:58:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:08.493 16:58:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:08.493 16:58:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.493 16:58:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.752 16:58:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:08.752 "name": "raid_bdev1", 00:20:08.752 "uuid": "a8f289dc-1339-4903-abe0-8629f3a35716", 00:20:08.752 "strip_size_kb": 0, 00:20:08.752 "state": "online", 00:20:08.752 "raid_level": "raid1", 00:20:08.752 "superblock": true, 00:20:08.753 "num_base_bdevs": 4, 00:20:08.753 "num_base_bdevs_discovered": 3, 00:20:08.753 "num_base_bdevs_operational": 3, 00:20:08.753 "base_bdevs_list": [ 00:20:08.753 { 00:20:08.753 "name": null, 00:20:08.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.753 "is_configured": false, 00:20:08.753 "data_offset": 2048, 00:20:08.753 "data_size": 63488 00:20:08.753 }, 00:20:08.753 { 00:20:08.753 "name": "pt2", 00:20:08.753 "uuid": "d99490d6-a997-5b26-9cf4-9cc3b7cbd9f0", 00:20:08.753 "is_configured": true, 00:20:08.753 "data_offset": 2048, 00:20:08.753 "data_size": 63488 00:20:08.753 }, 00:20:08.753 { 00:20:08.753 "name": "pt3", 00:20:08.753 "uuid": "a0001ced-ecdc-55e4-926c-0af3e2a29bec", 00:20:08.753 "is_configured": true, 00:20:08.753 "data_offset": 2048, 00:20:08.753 "data_size": 63488 00:20:08.753 }, 00:20:08.753 { 00:20:08.753 "name": "pt4", 00:20:08.753 "uuid": "b7520a16-6552-51fd-87ba-1ef2f88d1980", 00:20:08.753 "is_configured": true, 00:20:08.753 "data_offset": 2048, 00:20:08.753 "data_size": 63488 00:20:08.753 } 00:20:08.753 ] 00:20:08.753 }' 00:20:08.753 16:58:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:08.753 16:58:57 -- common/autotest_common.sh@10 -- # set +x 00:20:09.322 16:58:58 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:20:09.322 16:58:58 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:09.581 [2024-11-05 16:58:58.428313] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:09.581 [2024-11-05 16:58:58.428487] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:20:09.581 [2024-11-05 16:58:58.428730] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:09.581 [2024-11-05 16:58:58.428911] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:09.581 [2024-11-05 16:58:58.429054] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state offline 00:20:09.581 16:58:58 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.581 16:58:58 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:20:09.840 16:58:58 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:20:09.840 16:58:58 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:20:09.840 16:58:58 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:10.098 [2024-11-05 16:58:58.940320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:10.098 [2024-11-05 16:58:58.940912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.098 [2024-11-05 16:58:58.941273] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:20:10.098 [2024-11-05 16:58:58.941530] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.098 [2024-11-05 16:58:58.944069] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.098 [2024-11-05 16:58:58.944381] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:10.098 [2024-11-05 16:58:58.944726] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:10.098 [2024-11-05 16:58:58.944925] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:10.098 pt1 00:20:10.098 16:58:58 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:10.098 16:58:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:10.098 16:58:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:10.098 16:58:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:10.098 16:58:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:10.098 16:58:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:10.098 16:58:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:10.098 16:58:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:10.098 16:58:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:10.098 16:58:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:10.098 16:58:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.098 16:58:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.357 16:58:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:10.357 "name": "raid_bdev1", 00:20:10.357 "uuid": "a8f289dc-1339-4903-abe0-8629f3a35716", 00:20:10.357 "strip_size_kb": 0, 00:20:10.357 "state": "configuring", 00:20:10.357 "raid_level": "raid1", 00:20:10.357 "superblock": true, 00:20:10.357 "num_base_bdevs": 4, 00:20:10.357 "num_base_bdevs_discovered": 1, 00:20:10.357 "num_base_bdevs_operational": 4, 00:20:10.357 "base_bdevs_list": [ 00:20:10.357 { 00:20:10.357 "name": "pt1", 00:20:10.357 "uuid": 
"b41c7e5c-6074-54e7-b011-b6c9fca22a22", 00:20:10.357 "is_configured": true, 00:20:10.357 "data_offset": 2048, 00:20:10.357 "data_size": 63488 00:20:10.357 }, 00:20:10.357 { 00:20:10.357 "name": null, 00:20:10.357 "uuid": "d99490d6-a997-5b26-9cf4-9cc3b7cbd9f0", 00:20:10.357 "is_configured": false, 00:20:10.357 "data_offset": 2048, 00:20:10.357 "data_size": 63488 00:20:10.357 }, 00:20:10.357 { 00:20:10.357 "name": null, 00:20:10.357 "uuid": "a0001ced-ecdc-55e4-926c-0af3e2a29bec", 00:20:10.357 "is_configured": false, 00:20:10.357 "data_offset": 2048, 00:20:10.357 "data_size": 63488 00:20:10.357 }, 00:20:10.357 { 00:20:10.357 "name": null, 00:20:10.357 "uuid": "b7520a16-6552-51fd-87ba-1ef2f88d1980", 00:20:10.357 "is_configured": false, 00:20:10.357 "data_offset": 2048, 00:20:10.357 "data_size": 63488 00:20:10.357 } 00:20:10.357 ] 00:20:10.357 }' 00:20:10.357 16:58:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:10.357 16:58:59 -- common/autotest_common.sh@10 -- # set +x 00:20:10.925 16:58:59 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:20:10.925 16:58:59 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:10.925 16:58:59 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:11.183 16:58:59 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:11.183 16:58:59 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:11.183 16:58:59 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:11.441 16:59:00 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:11.441 16:59:00 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:11.441 16:59:00 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:11.698 16:59:00 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:11.698 16:59:00 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:11.698 16:59:00 -- bdev/bdev_raid.sh@489 -- # i=3 00:20:11.699 16:59:00 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:11.699 [2024-11-05 16:59:00.593049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:11.699 [2024-11-05 16:59:00.593770] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:11.699 [2024-11-05 16:59:00.594085] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:20:11.699 [2024-11-05 16:59:00.594361] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:11.699 [2024-11-05 16:59:00.595144] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:11.699 [2024-11-05 16:59:00.595496] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:11.699 [2024-11-05 16:59:00.595833] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:11.699 [2024-11-05 16:59:00.595993] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:11.699 [2024-11-05 16:59:00.596095] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:11.699 [2024-11-05 16:59:00.596149] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c980 name raid_bdev1, state configuring 
00:20:11.699 [2024-11-05 16:59:00.596319] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:11.957 pt4 00:20:11.957 16:59:00 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:11.957 16:59:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:11.957 16:59:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:11.957 16:59:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:11.957 16:59:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:11.957 16:59:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:11.957 16:59:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:11.957 16:59:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:11.957 16:59:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:11.957 16:59:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:11.957 16:59:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.957 16:59:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.215 16:59:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:12.215 "name": "raid_bdev1", 00:20:12.215 "uuid": "a8f289dc-1339-4903-abe0-8629f3a35716", 00:20:12.215 "strip_size_kb": 0, 00:20:12.215 "state": "configuring", 00:20:12.215 "raid_level": "raid1", 00:20:12.215 "superblock": true, 00:20:12.215 "num_base_bdevs": 4, 00:20:12.215 "num_base_bdevs_discovered": 1, 00:20:12.215 "num_base_bdevs_operational": 3, 00:20:12.215 "base_bdevs_list": [ 00:20:12.215 { 00:20:12.215 "name": null, 00:20:12.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.215 "is_configured": false, 00:20:12.215 "data_offset": 2048, 00:20:12.215 "data_size": 63488 00:20:12.215 }, 00:20:12.215 { 00:20:12.215 "name": null, 00:20:12.215 "uuid": "d99490d6-a997-5b26-9cf4-9cc3b7cbd9f0", 00:20:12.215 "is_configured": false, 00:20:12.215 "data_offset": 2048, 00:20:12.215 "data_size": 63488 00:20:12.215 }, 00:20:12.215 { 00:20:12.215 "name": null, 00:20:12.215 "uuid": "a0001ced-ecdc-55e4-926c-0af3e2a29bec", 00:20:12.215 "is_configured": false, 00:20:12.215 "data_offset": 2048, 00:20:12.215 "data_size": 63488 00:20:12.215 }, 00:20:12.216 { 00:20:12.216 "name": "pt4", 00:20:12.216 "uuid": "b7520a16-6552-51fd-87ba-1ef2f88d1980", 00:20:12.216 "is_configured": true, 00:20:12.216 "data_offset": 2048, 00:20:12.216 "data_size": 63488 00:20:12.216 } 00:20:12.216 ] 00:20:12.216 }' 00:20:12.216 16:59:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:12.216 16:59:00 -- common/autotest_common.sh@10 -- # set +x 00:20:12.783 16:59:01 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:20:12.783 16:59:01 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:12.783 16:59:01 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:12.783 [2024-11-05 16:59:01.625225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:12.783 [2024-11-05 16:59:01.625821] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:12.783 [2024-11-05 16:59:01.626161] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d280 00:20:12.783 [2024-11-05 16:59:01.626431] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:12.783 [2024-11-05 
16:59:01.627196] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:12.783 [2024-11-05 16:59:01.627506] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:12.783 [2024-11-05 16:59:01.627839] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:12.783 [2024-11-05 16:59:01.628012] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:12.783 pt2 00:20:12.783 16:59:01 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:20:12.783 16:59:01 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:12.783 16:59:01 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:13.041 [2024-11-05 16:59:01.821260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:13.041 [2024-11-05 16:59:01.821505] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.041 [2024-11-05 16:59:01.821579] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:20:13.041 [2024-11-05 16:59:01.821830] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.041 [2024-11-05 16:59:01.822449] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.042 [2024-11-05 16:59:01.822662] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:13.042 [2024-11-05 16:59:01.822914] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:13.042 [2024-11-05 16:59:01.823049] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:13.042 [2024-11-05 16:59:01.823232] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000cf80 00:20:13.042 [2024-11-05 16:59:01.823382] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:13.042 [2024-11-05 16:59:01.823532] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:20:13.042 [2024-11-05 16:59:01.823973] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000cf80 00:20:13.042 [2024-11-05 16:59:01.824101] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000cf80 00:20:13.042 [2024-11-05 16:59:01.824318] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.042 pt3 00:20:13.042 16:59:01 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:20:13.042 16:59:01 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:13.042 16:59:01 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:13.042 16:59:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:13.042 16:59:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:13.042 16:59:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:13.042 16:59:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:13.042 16:59:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:13.042 16:59:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:13.042 16:59:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:13.042 16:59:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:13.042 16:59:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:13.042 16:59:01 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.042 16:59:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.300 16:59:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:13.300 "name": "raid_bdev1", 00:20:13.300 "uuid": "a8f289dc-1339-4903-abe0-8629f3a35716", 00:20:13.300 "strip_size_kb": 0, 00:20:13.300 "state": "online", 00:20:13.300 "raid_level": "raid1", 00:20:13.300 "superblock": true, 00:20:13.300 "num_base_bdevs": 4, 00:20:13.300 "num_base_bdevs_discovered": 3, 00:20:13.300 "num_base_bdevs_operational": 3, 00:20:13.300 "base_bdevs_list": [ 00:20:13.300 { 00:20:13.300 "name": null, 00:20:13.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.300 "is_configured": false, 00:20:13.300 "data_offset": 2048, 00:20:13.300 "data_size": 63488 00:20:13.300 }, 00:20:13.300 { 00:20:13.300 "name": "pt2", 00:20:13.300 "uuid": "d99490d6-a997-5b26-9cf4-9cc3b7cbd9f0", 00:20:13.300 "is_configured": true, 00:20:13.300 "data_offset": 2048, 00:20:13.300 "data_size": 63488 00:20:13.300 }, 00:20:13.300 { 00:20:13.300 "name": "pt3", 00:20:13.300 "uuid": "a0001ced-ecdc-55e4-926c-0af3e2a29bec", 00:20:13.300 "is_configured": true, 00:20:13.300 "data_offset": 2048, 00:20:13.300 "data_size": 63488 00:20:13.300 }, 00:20:13.300 { 00:20:13.300 "name": "pt4", 00:20:13.300 "uuid": "b7520a16-6552-51fd-87ba-1ef2f88d1980", 00:20:13.300 "is_configured": true, 00:20:13.300 "data_offset": 2048, 00:20:13.300 "data_size": 63488 00:20:13.300 } 00:20:13.300 ] 00:20:13.300 }' 00:20:13.300 16:59:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:13.301 16:59:02 -- common/autotest_common.sh@10 -- # set +x 00:20:13.867 16:59:02 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:13.867 16:59:02 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:20:14.126 [2024-11-05 16:59:02.909641] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:14.126 16:59:02 -- bdev/bdev_raid.sh@506 -- # '[' a8f289dc-1339-4903-abe0-8629f3a35716 '!=' a8f289dc-1339-4903-abe0-8629f3a35716 ']' 00:20:14.126 16:59:02 -- bdev/bdev_raid.sh@511 -- # killprocess 121744 00:20:14.126 16:59:02 -- common/autotest_common.sh@936 -- # '[' -z 121744 ']' 00:20:14.126 16:59:02 -- common/autotest_common.sh@940 -- # kill -0 121744 00:20:14.126 16:59:02 -- common/autotest_common.sh@941 -- # uname 00:20:14.126 16:59:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:14.126 16:59:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121744 00:20:14.126 killing process with pid 121744 00:20:14.126 16:59:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:14.126 16:59:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:14.126 16:59:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121744' 00:20:14.126 16:59:02 -- common/autotest_common.sh@955 -- # kill 121744 00:20:14.126 16:59:02 -- common/autotest_common.sh@960 -- # wait 121744 00:20:14.126 [2024-11-05 16:59:02.942001] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:14.126 [2024-11-05 16:59:02.942071] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:14.126 [2024-11-05 16:59:02.942191] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:14.126 [2024-11-05 
16:59:02.942242] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state offline 00:20:14.384 [2024-11-05 16:59:03.215995] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:15.317 16:59:04 -- bdev/bdev_raid.sh@513 -- # return 0 00:20:15.317 00:20:15.317 real 0m21.714s 00:20:15.317 user 0m40.068s 00:20:15.317 sys 0m2.311s 00:20:15.317 ************************************ 00:20:15.317 END TEST raid_superblock_test 00:20:15.317 ************************************ 00:20:15.317 16:59:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:15.317 16:59:04 -- common/autotest_common.sh@10 -- # set +x 00:20:15.317 16:59:04 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:20:15.317 16:59:04 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:20:15.317 16:59:04 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:20:15.317 16:59:04 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:20:15.317 16:59:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:15.317 16:59:04 -- common/autotest_common.sh@10 -- # set +x 00:20:15.576 ************************************ 00:20:15.576 START TEST raid_rebuild_test 00:20:15.576 ************************************ 00:20:15.576 16:59:04 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 false false 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@544 -- # raid_pid=122418 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@545 -- # waitforlisten 122418 /var/tmp/spdk-raid.sock 00:20:15.576 16:59:04 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:15.576 16:59:04 -- common/autotest_common.sh@829 -- # '[' -z 122418 ']' 00:20:15.576 16:59:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:15.576 16:59:04 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:20:15.576 16:59:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:15.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:15.576 16:59:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:15.576 16:59:04 -- common/autotest_common.sh@10 -- # set +x 00:20:15.576 [2024-11-05 16:59:04.296327] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:15.576 [2024-11-05 16:59:04.296719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122418 ] 00:20:15.576 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:15.576 Zero copy mechanism will not be used. 00:20:15.576 [2024-11-05 16:59:04.467702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.834 [2024-11-05 16:59:04.696599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.092 [2024-11-05 16:59:04.876921] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:16.349 16:59:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:16.349 16:59:05 -- common/autotest_common.sh@862 -- # return 0 00:20:16.349 16:59:05 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:16.349 16:59:05 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:16.349 16:59:05 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:16.607 BaseBdev1 00:20:16.882 16:59:05 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:16.882 16:59:05 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:16.882 16:59:05 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:16.882 BaseBdev2 00:20:17.161 16:59:05 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:17.161 spare_malloc 00:20:17.161 16:59:06 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:17.419 spare_delay 00:20:17.419 16:59:06 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:17.678 [2024-11-05 16:59:06.470731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:17.678 [2024-11-05 16:59:06.471560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.678 [2024-11-05 16:59:06.471868] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:20:17.678 [2024-11-05 16:59:06.472153] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.678 [2024-11-05 16:59:06.474846] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.678 [2024-11-05 16:59:06.475164] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:17.678 spare 00:20:17.678 16:59:06 -- bdev/bdev_raid.sh@563 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:17.937 [2024-11-05 16:59:06.691686] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:17.937 [2024-11-05 16:59:06.693751] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:17.937 [2024-11-05 16:59:06.693970] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:20:17.937 [2024-11-05 16:59:06.694086] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:17.937 [2024-11-05 16:59:06.694408] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:20:17.937 [2024-11-05 16:59:06.694998] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:20:17.937 [2024-11-05 16:59:06.695121] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:20:17.937 [2024-11-05 16:59:06.695431] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.937 16:59:06 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:17.937 16:59:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:17.937 16:59:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:17.937 16:59:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:17.937 16:59:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:17.937 16:59:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:17.937 16:59:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:17.937 16:59:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:17.937 16:59:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:17.937 16:59:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:17.937 16:59:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.937 16:59:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.195 16:59:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:18.195 "name": "raid_bdev1", 00:20:18.195 "uuid": "5612d03a-51aa-4b23-b9aa-a7a343eb930b", 00:20:18.195 "strip_size_kb": 0, 00:20:18.195 "state": "online", 00:20:18.195 "raid_level": "raid1", 00:20:18.195 "superblock": false, 00:20:18.195 "num_base_bdevs": 2, 00:20:18.195 "num_base_bdevs_discovered": 2, 00:20:18.195 "num_base_bdevs_operational": 2, 00:20:18.195 "base_bdevs_list": [ 00:20:18.195 { 00:20:18.195 "name": "BaseBdev1", 00:20:18.195 "uuid": "4afa7d72-a0a9-4e38-9059-0c4df18ce898", 00:20:18.195 "is_configured": true, 00:20:18.195 "data_offset": 0, 00:20:18.195 "data_size": 65536 00:20:18.195 }, 00:20:18.195 { 00:20:18.195 "name": "BaseBdev2", 00:20:18.195 "uuid": "d2e68f93-cc71-408d-b5c1-b021e2efc930", 00:20:18.195 "is_configured": true, 00:20:18.195 "data_offset": 0, 00:20:18.195 "data_size": 65536 00:20:18.195 } 00:20:18.195 ] 00:20:18.195 }' 00:20:18.195 16:59:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:18.195 16:59:06 -- common/autotest_common.sh@10 -- # set +x 00:20:18.762 16:59:07 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:18.762 16:59:07 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:19.021 [2024-11-05 16:59:07.712091] bdev_raid.c: 
993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:19.021 16:59:07 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:19.021 16:59:07 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.021 16:59:07 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:19.280 16:59:07 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:19.280 16:59:07 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:19.280 16:59:07 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:19.280 16:59:07 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:19.280 16:59:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:19.280 16:59:07 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:19.280 16:59:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:19.280 16:59:07 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:19.280 16:59:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:19.280 16:59:07 -- bdev/nbd_common.sh@12 -- # local i 00:20:19.280 16:59:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:19.280 16:59:07 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:19.280 16:59:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:19.539 [2024-11-05 16:59:08.229180] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:19.539 /dev/nbd0 00:20:19.539 16:59:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:19.539 16:59:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:19.539 16:59:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:19.539 16:59:08 -- common/autotest_common.sh@867 -- # local i 00:20:19.539 16:59:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:19.539 16:59:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:19.539 16:59:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:19.539 16:59:08 -- common/autotest_common.sh@871 -- # break 00:20:19.539 16:59:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:19.539 16:59:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:19.539 16:59:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:19.539 1+0 records in 00:20:19.539 1+0 records out 00:20:19.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561797 s, 7.3 MB/s 00:20:19.539 16:59:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:19.539 16:59:08 -- common/autotest_common.sh@884 -- # size=4096 00:20:19.539 16:59:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:19.539 16:59:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:19.539 16:59:08 -- common/autotest_common.sh@887 -- # return 0 00:20:19.539 16:59:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:19.539 16:59:08 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:19.539 16:59:08 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:19.539 16:59:08 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:19.539 16:59:08 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:24.806 65536+0 records in 00:20:24.806 65536+0 records out 00:20:24.806 33554432 bytes (34 MB, 32 MiB) 
copied, 4.75024 s, 7.1 MB/s 00:20:24.806 16:59:13 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:24.806 16:59:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:24.806 16:59:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:24.806 16:59:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:24.806 16:59:13 -- bdev/nbd_common.sh@51 -- # local i 00:20:24.806 16:59:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:24.806 16:59:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:24.806 [2024-11-05 16:59:13.284251] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.806 16:59:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:24.806 16:59:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:24.806 16:59:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:24.806 16:59:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:24.806 16:59:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:24.806 16:59:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:24.806 16:59:13 -- bdev/nbd_common.sh@41 -- # break 00:20:24.806 16:59:13 -- bdev/nbd_common.sh@45 -- # return 0 00:20:24.806 16:59:13 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:24.806 [2024-11-05 16:59:13.543502] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:24.806 16:59:13 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:24.806 16:59:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:24.806 16:59:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:24.806 16:59:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:24.806 16:59:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:24.806 16:59:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:24.806 16:59:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:24.806 16:59:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:24.806 16:59:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:24.806 16:59:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:24.806 16:59:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.806 16:59:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.065 16:59:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:25.065 "name": "raid_bdev1", 00:20:25.065 "uuid": "5612d03a-51aa-4b23-b9aa-a7a343eb930b", 00:20:25.065 "strip_size_kb": 0, 00:20:25.065 "state": "online", 00:20:25.065 "raid_level": "raid1", 00:20:25.065 "superblock": false, 00:20:25.065 "num_base_bdevs": 2, 00:20:25.065 "num_base_bdevs_discovered": 1, 00:20:25.065 "num_base_bdevs_operational": 1, 00:20:25.065 "base_bdevs_list": [ 00:20:25.065 { 00:20:25.065 "name": null, 00:20:25.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.065 "is_configured": false, 00:20:25.065 "data_offset": 0, 00:20:25.065 "data_size": 65536 00:20:25.065 }, 00:20:25.065 { 00:20:25.065 "name": "BaseBdev2", 00:20:25.065 "uuid": "d2e68f93-cc71-408d-b5c1-b021e2efc930", 00:20:25.065 "is_configured": true, 00:20:25.065 "data_offset": 0, 00:20:25.065 "data_size": 65536 00:20:25.065 } 00:20:25.065 ] 00:20:25.065 }' 
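The 32 MiB dd above is the data-priming step of the rebuild test: the raid1 bdev is exported as /dev/nbd0 and written end to end with random data (65536 blocks of 512 bytes, matching the raid_bdev_size read back earlier), so the rebuild can later be verified by byte comparison. BaseBdev1 is then pulled to degrade the array, which stays online with one of its two base bdevs. Condensed from the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc nbd_start_disk raid_bdev1 /dev/nbd0
    # write_unit_size is 1 block for raid1, so plain 512-byte direct writes suffice.
    dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
    $rpc nbd_stop_disk /dev/nbd0
    # Degrade the array; raid_bdev1 remains online with 1 of 2 base bdevs.
    $rpc bdev_raid_remove_base_bdev BaseBdev1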
00:20:25.065 16:59:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:25.065 16:59:13 -- common/autotest_common.sh@10 -- # set +x 00:20:25.633 16:59:14 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:25.891 [2024-11-05 16:59:14.584088] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:25.891 [2024-11-05 16:59:14.584300] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:25.891 [2024-11-05 16:59:14.597490] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09550 00:20:25.891 [2024-11-05 16:59:14.599776] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:25.891 16:59:14 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:26.827 16:59:15 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:26.827 16:59:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:26.827 16:59:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:26.827 16:59:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:26.827 16:59:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:26.827 16:59:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.827 16:59:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.085 16:59:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:27.085 "name": "raid_bdev1", 00:20:27.085 "uuid": "5612d03a-51aa-4b23-b9aa-a7a343eb930b", 00:20:27.085 "strip_size_kb": 0, 00:20:27.085 "state": "online", 00:20:27.085 "raid_level": "raid1", 00:20:27.085 "superblock": false, 00:20:27.085 "num_base_bdevs": 2, 00:20:27.085 "num_base_bdevs_discovered": 2, 00:20:27.085 "num_base_bdevs_operational": 2, 00:20:27.085 "process": { 00:20:27.085 "type": "rebuild", 00:20:27.085 "target": "spare", 00:20:27.085 "progress": { 00:20:27.085 "blocks": 24576, 00:20:27.085 "percent": 37 00:20:27.085 } 00:20:27.085 }, 00:20:27.085 "base_bdevs_list": [ 00:20:27.085 { 00:20:27.085 "name": "spare", 00:20:27.085 "uuid": "d6b1e6e6-f89b-55ce-8b22-253d366d98fa", 00:20:27.085 "is_configured": true, 00:20:27.085 "data_offset": 0, 00:20:27.085 "data_size": 65536 00:20:27.085 }, 00:20:27.085 { 00:20:27.085 "name": "BaseBdev2", 00:20:27.085 "uuid": "d2e68f93-cc71-408d-b5c1-b021e2efc930", 00:20:27.085 "is_configured": true, 00:20:27.085 "data_offset": 0, 00:20:27.085 "data_size": 65536 00:20:27.085 } 00:20:27.085 ] 00:20:27.085 }' 00:20:27.085 16:59:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:27.085 16:59:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:27.085 16:59:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:27.085 16:59:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:27.085 16:59:15 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:27.344 [2024-11-05 16:59:16.177337] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:27.344 [2024-11-05 16:59:16.209140] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:27.344 [2024-11-05 16:59:16.209791] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.602 16:59:16 -- 
bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:27.602 16:59:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:27.602 16:59:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:27.602 16:59:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:27.602 16:59:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:27.602 16:59:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:27.602 16:59:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:27.602 16:59:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:27.602 16:59:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:27.602 16:59:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:27.602 16:59:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.602 16:59:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.861 16:59:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:27.861 "name": "raid_bdev1", 00:20:27.861 "uuid": "5612d03a-51aa-4b23-b9aa-a7a343eb930b", 00:20:27.861 "strip_size_kb": 0, 00:20:27.861 "state": "online", 00:20:27.861 "raid_level": "raid1", 00:20:27.861 "superblock": false, 00:20:27.861 "num_base_bdevs": 2, 00:20:27.861 "num_base_bdevs_discovered": 1, 00:20:27.861 "num_base_bdevs_operational": 1, 00:20:27.861 "base_bdevs_list": [ 00:20:27.861 { 00:20:27.861 "name": null, 00:20:27.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.861 "is_configured": false, 00:20:27.861 "data_offset": 0, 00:20:27.861 "data_size": 65536 00:20:27.861 }, 00:20:27.861 { 00:20:27.861 "name": "BaseBdev2", 00:20:27.861 "uuid": "d2e68f93-cc71-408d-b5c1-b021e2efc930", 00:20:27.861 "is_configured": true, 00:20:27.861 "data_offset": 0, 00:20:27.861 "data_size": 65536 00:20:27.861 } 00:20:27.861 ] 00:20:27.861 }' 00:20:27.861 16:59:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:27.861 16:59:16 -- common/autotest_common.sh@10 -- # set +x 00:20:28.428 16:59:17 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:28.428 16:59:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:28.428 16:59:17 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:28.428 16:59:17 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:28.428 16:59:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:28.428 16:59:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.428 16:59:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.687 16:59:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:28.687 "name": "raid_bdev1", 00:20:28.687 "uuid": "5612d03a-51aa-4b23-b9aa-a7a343eb930b", 00:20:28.687 "strip_size_kb": 0, 00:20:28.687 "state": "online", 00:20:28.687 "raid_level": "raid1", 00:20:28.687 "superblock": false, 00:20:28.687 "num_base_bdevs": 2, 00:20:28.687 "num_base_bdevs_discovered": 1, 00:20:28.687 "num_base_bdevs_operational": 1, 00:20:28.687 "base_bdevs_list": [ 00:20:28.687 { 00:20:28.687 "name": null, 00:20:28.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.687 "is_configured": false, 00:20:28.687 "data_offset": 0, 00:20:28.687 "data_size": 65536 00:20:28.687 }, 00:20:28.687 { 00:20:28.687 "name": "BaseBdev2", 00:20:28.687 "uuid": "d2e68f93-cc71-408d-b5c1-b021e2efc930", 00:20:28.687 "is_configured": true, 
00:20:28.687 "data_offset": 0, 00:20:28.687 "data_size": 65536 00:20:28.687 } 00:20:28.687 ] 00:20:28.687 }' 00:20:28.687 16:59:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:28.687 16:59:17 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:28.687 16:59:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:28.687 16:59:17 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:28.687 16:59:17 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:28.946 [2024-11-05 16:59:17.749239] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:28.946 [2024-11-05 16:59:17.749446] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:28.946 [2024-11-05 16:59:17.762107] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0 00:20:28.946 [2024-11-05 16:59:17.764269] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:28.946 16:59:17 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:29.910 16:59:18 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:29.910 16:59:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:29.910 16:59:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:29.910 16:59:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:29.910 16:59:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:29.910 16:59:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.910 16:59:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.167 16:59:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:30.167 "name": "raid_bdev1", 00:20:30.167 "uuid": "5612d03a-51aa-4b23-b9aa-a7a343eb930b", 00:20:30.167 "strip_size_kb": 0, 00:20:30.167 "state": "online", 00:20:30.167 "raid_level": "raid1", 00:20:30.167 "superblock": false, 00:20:30.167 "num_base_bdevs": 2, 00:20:30.167 "num_base_bdevs_discovered": 2, 00:20:30.167 "num_base_bdevs_operational": 2, 00:20:30.167 "process": { 00:20:30.167 "type": "rebuild", 00:20:30.167 "target": "spare", 00:20:30.167 "progress": { 00:20:30.167 "blocks": 24576, 00:20:30.167 "percent": 37 00:20:30.167 } 00:20:30.167 }, 00:20:30.167 "base_bdevs_list": [ 00:20:30.167 { 00:20:30.167 "name": "spare", 00:20:30.167 "uuid": "d6b1e6e6-f89b-55ce-8b22-253d366d98fa", 00:20:30.167 "is_configured": true, 00:20:30.167 "data_offset": 0, 00:20:30.167 "data_size": 65536 00:20:30.167 }, 00:20:30.167 { 00:20:30.167 "name": "BaseBdev2", 00:20:30.167 "uuid": "d2e68f93-cc71-408d-b5c1-b021e2efc930", 00:20:30.167 "is_configured": true, 00:20:30.167 "data_offset": 0, 00:20:30.167 "data_size": 65536 00:20:30.167 } 00:20:30.167 ] 00:20:30.167 }' 00:20:30.167 16:59:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:30.425 16:59:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:30.425 16:59:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:30.425 16:59:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:30.425 16:59:19 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:30.425 16:59:19 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:30.425 16:59:19 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:30.425 16:59:19 -- 
bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:30.425 16:59:19 -- bdev/bdev_raid.sh@657 -- # local timeout=399 00:20:30.425 16:59:19 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:30.425 16:59:19 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:30.425 16:59:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:30.425 16:59:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:30.425 16:59:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:30.425 16:59:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:30.425 16:59:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.425 16:59:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.683 16:59:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:30.683 "name": "raid_bdev1", 00:20:30.683 "uuid": "5612d03a-51aa-4b23-b9aa-a7a343eb930b", 00:20:30.683 "strip_size_kb": 0, 00:20:30.683 "state": "online", 00:20:30.683 "raid_level": "raid1", 00:20:30.683 "superblock": false, 00:20:30.683 "num_base_bdevs": 2, 00:20:30.683 "num_base_bdevs_discovered": 2, 00:20:30.683 "num_base_bdevs_operational": 2, 00:20:30.683 "process": { 00:20:30.683 "type": "rebuild", 00:20:30.683 "target": "spare", 00:20:30.683 "progress": { 00:20:30.683 "blocks": 30720, 00:20:30.683 "percent": 46 00:20:30.683 } 00:20:30.683 }, 00:20:30.683 "base_bdevs_list": [ 00:20:30.683 { 00:20:30.683 "name": "spare", 00:20:30.683 "uuid": "d6b1e6e6-f89b-55ce-8b22-253d366d98fa", 00:20:30.683 "is_configured": true, 00:20:30.683 "data_offset": 0, 00:20:30.683 "data_size": 65536 00:20:30.683 }, 00:20:30.683 { 00:20:30.683 "name": "BaseBdev2", 00:20:30.683 "uuid": "d2e68f93-cc71-408d-b5c1-b021e2efc930", 00:20:30.683 "is_configured": true, 00:20:30.683 "data_offset": 0, 00:20:30.683 "data_size": 65536 00:20:30.683 } 00:20:30.683 ] 00:20:30.683 }' 00:20:30.683 16:59:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:30.683 16:59:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:30.683 16:59:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:30.683 16:59:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:30.683 16:59:19 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:31.624 16:59:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:31.624 16:59:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:31.624 16:59:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:31.624 16:59:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:31.624 16:59:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:31.624 16:59:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:31.624 16:59:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.624 16:59:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.882 16:59:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:31.882 "name": "raid_bdev1", 00:20:31.882 "uuid": "5612d03a-51aa-4b23-b9aa-a7a343eb930b", 00:20:31.882 "strip_size_kb": 0, 00:20:31.882 "state": "online", 00:20:31.882 "raid_level": "raid1", 00:20:31.882 "superblock": false, 00:20:31.882 "num_base_bdevs": 2, 00:20:31.882 "num_base_bdevs_discovered": 2, 00:20:31.882 "num_base_bdevs_operational": 2, 00:20:31.882 "process": { 
00:20:31.882 "type": "rebuild", 00:20:31.882 "target": "spare", 00:20:31.882 "progress": { 00:20:31.882 "blocks": 59392, 00:20:31.882 "percent": 90 00:20:31.882 } 00:20:31.882 }, 00:20:31.882 "base_bdevs_list": [ 00:20:31.882 { 00:20:31.882 "name": "spare", 00:20:31.882 "uuid": "d6b1e6e6-f89b-55ce-8b22-253d366d98fa", 00:20:31.882 "is_configured": true, 00:20:31.882 "data_offset": 0, 00:20:31.882 "data_size": 65536 00:20:31.882 }, 00:20:31.882 { 00:20:31.882 "name": "BaseBdev2", 00:20:31.883 "uuid": "d2e68f93-cc71-408d-b5c1-b021e2efc930", 00:20:31.883 "is_configured": true, 00:20:31.883 "data_offset": 0, 00:20:31.883 "data_size": 65536 00:20:31.883 } 00:20:31.883 ] 00:20:31.883 }' 00:20:31.883 16:59:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:31.883 16:59:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:31.883 16:59:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:32.141 16:59:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:32.141 16:59:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:32.141 [2024-11-05 16:59:20.982352] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:32.141 [2024-11-05 16:59:20.982603] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:32.141 [2024-11-05 16:59:20.983245] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.076 16:59:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:33.076 16:59:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:33.076 16:59:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:33.076 16:59:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:33.076 16:59:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:33.076 16:59:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:33.076 16:59:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.076 16:59:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.336 16:59:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:33.336 "name": "raid_bdev1", 00:20:33.336 "uuid": "5612d03a-51aa-4b23-b9aa-a7a343eb930b", 00:20:33.336 "strip_size_kb": 0, 00:20:33.336 "state": "online", 00:20:33.336 "raid_level": "raid1", 00:20:33.336 "superblock": false, 00:20:33.336 "num_base_bdevs": 2, 00:20:33.336 "num_base_bdevs_discovered": 2, 00:20:33.336 "num_base_bdevs_operational": 2, 00:20:33.336 "base_bdevs_list": [ 00:20:33.336 { 00:20:33.336 "name": "spare", 00:20:33.336 "uuid": "d6b1e6e6-f89b-55ce-8b22-253d366d98fa", 00:20:33.336 "is_configured": true, 00:20:33.336 "data_offset": 0, 00:20:33.336 "data_size": 65536 00:20:33.336 }, 00:20:33.336 { 00:20:33.336 "name": "BaseBdev2", 00:20:33.336 "uuid": "d2e68f93-cc71-408d-b5c1-b021e2efc930", 00:20:33.336 "is_configured": true, 00:20:33.336 "data_offset": 0, 00:20:33.336 "data_size": 65536 00:20:33.336 } 00:20:33.336 ] 00:20:33.336 }' 00:20:33.336 16:59:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:33.336 16:59:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:33.336 16:59:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:33.336 16:59:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:33.336 16:59:22 -- bdev/bdev_raid.sh@660 -- # break 00:20:33.336 16:59:22 -- 
bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:33.336 16:59:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:33.336 16:59:22 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:33.336 16:59:22 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:33.336 16:59:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:33.336 16:59:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.336 16:59:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.598 16:59:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:33.598 "name": "raid_bdev1", 00:20:33.598 "uuid": "5612d03a-51aa-4b23-b9aa-a7a343eb930b", 00:20:33.598 "strip_size_kb": 0, 00:20:33.598 "state": "online", 00:20:33.598 "raid_level": "raid1", 00:20:33.598 "superblock": false, 00:20:33.598 "num_base_bdevs": 2, 00:20:33.598 "num_base_bdevs_discovered": 2, 00:20:33.598 "num_base_bdevs_operational": 2, 00:20:33.598 "base_bdevs_list": [ 00:20:33.598 { 00:20:33.598 "name": "spare", 00:20:33.598 "uuid": "d6b1e6e6-f89b-55ce-8b22-253d366d98fa", 00:20:33.598 "is_configured": true, 00:20:33.598 "data_offset": 0, 00:20:33.598 "data_size": 65536 00:20:33.598 }, 00:20:33.598 { 00:20:33.598 "name": "BaseBdev2", 00:20:33.598 "uuid": "d2e68f93-cc71-408d-b5c1-b021e2efc930", 00:20:33.598 "is_configured": true, 00:20:33.598 "data_offset": 0, 00:20:33.598 "data_size": 65536 00:20:33.598 } 00:20:33.598 ] 00:20:33.598 }' 00:20:33.598 16:59:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:33.598 16:59:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:33.598 16:59:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:33.856 16:59:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:33.856 16:59:22 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:33.856 16:59:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:33.856 16:59:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:33.856 16:59:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:33.856 16:59:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:33.856 16:59:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:33.856 16:59:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:33.856 16:59:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:33.856 16:59:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:33.856 16:59:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:33.856 16:59:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.856 16:59:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.115 16:59:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:34.115 "name": "raid_bdev1", 00:20:34.115 "uuid": "5612d03a-51aa-4b23-b9aa-a7a343eb930b", 00:20:34.115 "strip_size_kb": 0, 00:20:34.115 "state": "online", 00:20:34.115 "raid_level": "raid1", 00:20:34.115 "superblock": false, 00:20:34.115 "num_base_bdevs": 2, 00:20:34.115 "num_base_bdevs_discovered": 2, 00:20:34.115 "num_base_bdevs_operational": 2, 00:20:34.115 "base_bdevs_list": [ 00:20:34.115 { 00:20:34.115 "name": "spare", 00:20:34.115 "uuid": "d6b1e6e6-f89b-55ce-8b22-253d366d98fa", 00:20:34.115 "is_configured": true, 00:20:34.115 "data_offset": 0, 
00:20:34.115 "data_size": 65536 00:20:34.115 }, 00:20:34.115 { 00:20:34.115 "name": "BaseBdev2", 00:20:34.115 "uuid": "d2e68f93-cc71-408d-b5c1-b021e2efc930", 00:20:34.115 "is_configured": true, 00:20:34.115 "data_offset": 0, 00:20:34.115 "data_size": 65536 00:20:34.115 } 00:20:34.115 ] 00:20:34.115 }' 00:20:34.115 16:59:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:34.115 16:59:22 -- common/autotest_common.sh@10 -- # set +x 00:20:34.682 16:59:23 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:34.941 [2024-11-05 16:59:23.621478] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:34.941 [2024-11-05 16:59:23.621673] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:34.941 [2024-11-05 16:59:23.621906] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:34.941 [2024-11-05 16:59:23.622140] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:34.941 [2024-11-05 16:59:23.622270] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:20:34.941 16:59:23 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.941 16:59:23 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:35.199 16:59:23 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:35.199 16:59:23 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:35.199 16:59:23 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:35.199 16:59:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:35.199 16:59:23 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:35.199 16:59:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:35.199 16:59:23 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:35.199 16:59:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:35.199 16:59:23 -- bdev/nbd_common.sh@12 -- # local i 00:20:35.199 16:59:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:35.199 16:59:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:35.199 16:59:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:35.457 /dev/nbd0 00:20:35.457 16:59:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:35.457 16:59:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:35.457 16:59:24 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:35.457 16:59:24 -- common/autotest_common.sh@867 -- # local i 00:20:35.457 16:59:24 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:35.457 16:59:24 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:35.457 16:59:24 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:35.457 16:59:24 -- common/autotest_common.sh@871 -- # break 00:20:35.457 16:59:24 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:35.457 16:59:24 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:35.457 16:59:24 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:35.457 1+0 records in 00:20:35.457 1+0 records out 00:20:35.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487182 s, 8.4 MB/s 00:20:35.457 16:59:24 
-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:35.457 16:59:24 -- common/autotest_common.sh@884 -- # size=4096 00:20:35.457 16:59:24 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:35.457 16:59:24 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:35.457 16:59:24 -- common/autotest_common.sh@887 -- # return 0 00:20:35.457 16:59:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:35.457 16:59:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:35.457 16:59:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:35.716 /dev/nbd1 00:20:35.716 16:59:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:35.716 16:59:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:35.716 16:59:24 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:35.716 16:59:24 -- common/autotest_common.sh@867 -- # local i 00:20:35.716 16:59:24 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:35.716 16:59:24 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:35.716 16:59:24 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:35.716 16:59:24 -- common/autotest_common.sh@871 -- # break 00:20:35.716 16:59:24 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:35.716 16:59:24 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:35.716 16:59:24 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:35.716 1+0 records in 00:20:35.716 1+0 records out 00:20:35.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678443 s, 6.0 MB/s 00:20:35.716 16:59:24 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:35.716 16:59:24 -- common/autotest_common.sh@884 -- # size=4096 00:20:35.716 16:59:24 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:35.716 16:59:24 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:35.716 16:59:24 -- common/autotest_common.sh@887 -- # return 0 00:20:35.716 16:59:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:35.716 16:59:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:35.716 16:59:24 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:35.975 16:59:24 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:35.975 16:59:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:35.975 16:59:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:35.975 16:59:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:35.975 16:59:24 -- bdev/nbd_common.sh@51 -- # local i 00:20:35.975 16:59:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:35.975 16:59:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:36.233 16:59:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:36.233 16:59:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:36.233 16:59:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:36.233 16:59:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:36.233 16:59:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:36.233 16:59:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:36.233 16:59:24 -- bdev/nbd_common.sh@41 -- # break 
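The two dd transcripts above are the tail end of waitfornbd from common/autotest_common.sh: the helper polls /proc/partitions for up to 20 iterations until the kernel lists the nbd device, then proves the device actually services reads by pulling one direct-I/O block and checking its size with stat. A minimal sketch of that shape, assuming a 0.1 s poll interval and a /tmp probe file, neither of which is visible in the trace:

    waitfornbd_sketch() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            # an active nbd device shows up in the kernel's partition list
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        (( i <= 20 )) || return 1
        # the node existing is not enough: one 4 KiB direct read must succeed
        dd if="/dev/$nbd_name" of=/tmp/nbdprobe bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s /tmp/nbdprobe)" != 0 ]
    }

waitfornbd_exit, also visible above, is the inverse: it loops on the same grep until the name disappears from /proc/partitions after nbd_stop_disk.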
00:20:36.233 16:59:24 -- bdev/nbd_common.sh@45 -- # return 0 00:20:36.233 16:59:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:36.233 16:59:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:36.492 16:59:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:36.492 16:59:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:36.492 16:59:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:36.492 16:59:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:36.492 16:59:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:36.492 16:59:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:36.492 16:59:25 -- bdev/nbd_common.sh@41 -- # break 00:20:36.492 16:59:25 -- bdev/nbd_common.sh@45 -- # return 0 00:20:36.492 16:59:25 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:36.492 16:59:25 -- bdev/bdev_raid.sh@709 -- # killprocess 122418 00:20:36.492 16:59:25 -- common/autotest_common.sh@936 -- # '[' -z 122418 ']' 00:20:36.492 16:59:25 -- common/autotest_common.sh@940 -- # kill -0 122418 00:20:36.492 16:59:25 -- common/autotest_common.sh@941 -- # uname 00:20:36.492 16:59:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:36.492 16:59:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122418 00:20:36.492 16:59:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:36.492 16:59:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:36.492 16:59:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122418' 00:20:36.492 killing process with pid 122418 00:20:36.492 16:59:25 -- common/autotest_common.sh@955 -- # kill 122418 00:20:36.492 Received shutdown signal, test time was about 60.000000 seconds 00:20:36.492 00:20:36.492 Latency(us) 00:20:36.492 [2024-11-05T16:59:25.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.492 [2024-11-05T16:59:25.369Z] =================================================================================================================== 00:20:36.492 [2024-11-05T16:59:25.369Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:36.492 16:59:25 -- common/autotest_common.sh@960 -- # wait 122418 00:20:36.492 [2024-11-05 16:59:25.248588] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:36.750 [2024-11-05 16:59:25.453235] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:37.711 ************************************ 00:20:37.711 END TEST raid_rebuild_test 00:20:37.711 ************************************ 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:37.711 00:20:37.711 real 0m22.283s 00:20:37.711 user 0m30.934s 00:20:37.711 sys 0m3.471s 00:20:37.711 16:59:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:37.711 16:59:26 -- common/autotest_common.sh@10 -- # set +x 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:20:37.711 16:59:26 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:20:37.711 16:59:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:37.711 16:59:26 -- common/autotest_common.sh@10 -- # set +x 00:20:37.711 ************************************ 00:20:37.711 START TEST raid_rebuild_test_sb 00:20:37.711 ************************************ 00:20:37.711 16:59:26 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 true false 00:20:37.711 
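This second invocation is the superblock variant: run_test wraps the same raid_rebuild_test function, and the four positional arguments map onto the locals the trace prints next at bdev_raid.sh lines 517-520. A sketch of the function head only; the body is the sequence this log goes on to record:

    raid_rebuild_test() {
        local raid_level=$1        # raid1 here
        local num_base_bdevs=$2    # 2: BaseBdev1 and BaseBdev2
        local superblock=$3        # true: bdev_raid_create is passed -s
        local background_io=$4     # false: no I/O load during the rebuild
        : # test body elided
    }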
16:59:26 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:37.711 16:59:26 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:37.712 16:59:26 -- bdev/bdev_raid.sh@544 -- # raid_pid=122960 00:20:37.712 16:59:26 -- bdev/bdev_raid.sh@545 -- # waitforlisten 122960 /var/tmp/spdk-raid.sock 00:20:37.712 16:59:26 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:37.712 16:59:26 -- common/autotest_common.sh@829 -- # '[' -z 122960 ']' 00:20:37.712 16:59:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:37.712 16:59:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:37.712 16:59:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:37.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:37.712 16:59:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:37.712 16:59:26 -- common/autotest_common.sh@10 -- # set +x 00:20:37.970 [2024-11-05 16:59:26.651834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:37.970 [2024-11-05 16:59:26.652267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122960 ] 00:20:37.971 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:37.971 Zero copy mechanism will not be used. 
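The startup handshake above follows the usual SPDK test shape: bdevperf is launched in the background, paused (-z), with a private RPC socket (-r), and waitforlisten blocks until that socket answers before any bdev is configured. The trace does not show waitforlisten's body, so the probe below, retrying rpc_get_methods while checking that the pid is still alive, is an assumption about its mechanics, not a copy of it:

    rpc=/var/tmp/spdk-raid.sock
    build/examples/bdevperf -r "$rpc" -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    until scripts/rpc.py -s "$rpc" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$raid_pid" 2>/dev/null || exit 1  # bdevperf died before listening
        sleep 0.2
    done

Paths here are relative to the spdk repo root, /home/vagrant/spdk_repo/spdk in this run.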
00:20:37.971 [2024-11-05 16:59:26.826200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.229 [2024-11-05 16:59:27.013327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.488 [2024-11-05 16:59:27.201564] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:38.747 16:59:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:38.747 16:59:27 -- common/autotest_common.sh@862 -- # return 0 00:20:38.747 16:59:27 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:38.747 16:59:27 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:38.747 16:59:27 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:39.006 BaseBdev1_malloc 00:20:39.006 16:59:27 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:39.265 [2024-11-05 16:59:28.084675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:39.265 [2024-11-05 16:59:28.085362] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.265 [2024-11-05 16:59:28.085687] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:20:39.265 [2024-11-05 16:59:28.085999] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.265 [2024-11-05 16:59:28.088790] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.265 [2024-11-05 16:59:28.089082] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:39.265 BaseBdev1 00:20:39.265 16:59:28 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:39.265 16:59:28 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:39.265 16:59:28 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:39.524 BaseBdev2_malloc 00:20:39.524 16:59:28 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:39.782 [2024-11-05 16:59:28.597568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:39.782 [2024-11-05 16:59:28.598045] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.782 [2024-11-05 16:59:28.598337] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:39.782 [2024-11-05 16:59:28.598652] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.782 [2024-11-05 16:59:28.601247] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.782 [2024-11-05 16:59:28.601554] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:39.782 BaseBdev2 00:20:39.782 16:59:28 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:40.040 spare_malloc 00:20:40.040 16:59:28 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:40.299 spare_delay 00:20:40.299 16:59:29 -- bdev/bdev_raid.sh@560 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:40.558 [2024-11-05 16:59:29.282288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:40.558 [2024-11-05 16:59:29.282963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.558 [2024-11-05 16:59:29.283252] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:20:40.558 [2024-11-05 16:59:29.283557] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.558 [2024-11-05 16:59:29.286208] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.558 [2024-11-05 16:59:29.286525] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:40.558 spare 00:20:40.558 16:59:29 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:40.817 [2024-11-05 16:59:29.559101] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:40.817 [2024-11-05 16:59:29.561396] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:40.817 [2024-11-05 16:59:29.561824] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:20:40.817 [2024-11-05 16:59:29.561990] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:40.817 [2024-11-05 16:59:29.562205] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:40.817 [2024-11-05 16:59:29.562763] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:20:40.817 [2024-11-05 16:59:29.562963] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:20:40.817 [2024-11-05 16:59:29.563307] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:40.817 16:59:29 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:40.817 16:59:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:40.818 16:59:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:40.818 16:59:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:40.818 16:59:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:40.818 16:59:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:40.818 16:59:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:40.818 16:59:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:40.818 16:59:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:40.818 16:59:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:40.818 16:59:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.818 16:59:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.076 16:59:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:41.076 "name": "raid_bdev1", 00:20:41.076 "uuid": "4a122392-a74c-4fa3-bd86-585b81d3ef97", 00:20:41.076 "strip_size_kb": 0, 00:20:41.076 "state": "online", 00:20:41.076 "raid_level": "raid1", 00:20:41.076 "superblock": true, 00:20:41.076 "num_base_bdevs": 2, 00:20:41.076 "num_base_bdevs_discovered": 2, 00:20:41.076 "num_base_bdevs_operational": 2, 00:20:41.076 
"base_bdevs_list": [ 00:20:41.076 { 00:20:41.076 "name": "BaseBdev1", 00:20:41.076 "uuid": "e9559b68-6742-506f-ae25-668fcf9a8a8a", 00:20:41.076 "is_configured": true, 00:20:41.076 "data_offset": 2048, 00:20:41.076 "data_size": 63488 00:20:41.076 }, 00:20:41.076 { 00:20:41.076 "name": "BaseBdev2", 00:20:41.076 "uuid": "b70b46be-04a4-5f7b-8109-69e5782621bf", 00:20:41.076 "is_configured": true, 00:20:41.076 "data_offset": 2048, 00:20:41.076 "data_size": 63488 00:20:41.076 } 00:20:41.076 ] 00:20:41.076 }' 00:20:41.076 16:59:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:41.076 16:59:29 -- common/autotest_common.sh@10 -- # set +x 00:20:41.644 16:59:30 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:41.644 16:59:30 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:41.902 [2024-11-05 16:59:30.655791] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:41.902 16:59:30 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:41.902 16:59:30 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.902 16:59:30 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:42.161 16:59:30 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:42.161 16:59:30 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:42.161 16:59:30 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:42.161 16:59:30 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:42.161 16:59:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:42.161 16:59:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:42.161 16:59:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:42.161 16:59:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:42.161 16:59:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:42.161 16:59:30 -- bdev/nbd_common.sh@12 -- # local i 00:20:42.161 16:59:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:42.161 16:59:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:42.161 16:59:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:42.420 [2024-11-05 16:59:31.175667] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:42.420 /dev/nbd0 00:20:42.420 16:59:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:42.420 16:59:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:42.420 16:59:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:42.420 16:59:31 -- common/autotest_common.sh@867 -- # local i 00:20:42.420 16:59:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:42.420 16:59:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:42.420 16:59:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:42.420 16:59:31 -- common/autotest_common.sh@871 -- # break 00:20:42.420 16:59:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:42.420 16:59:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:42.420 16:59:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:42.420 1+0 records in 00:20:42.420 1+0 records out 00:20:42.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607438 s, 6.7 MB/s 00:20:42.420 16:59:31 -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.420 16:59:31 -- common/autotest_common.sh@884 -- # size=4096 00:20:42.420 16:59:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.420 16:59:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:42.420 16:59:31 -- common/autotest_common.sh@887 -- # return 0 00:20:42.420 16:59:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:42.420 16:59:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:42.420 16:59:31 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:42.420 16:59:31 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:42.420 16:59:31 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:47.689 63488+0 records in 00:20:47.689 63488+0 records out 00:20:47.689 32505856 bytes (33 MB, 31 MiB) copied, 5.29029 s, 6.1 MB/s 00:20:47.689 16:59:36 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:47.689 16:59:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:47.689 16:59:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:47.689 16:59:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:47.689 16:59:36 -- bdev/nbd_common.sh@51 -- # local i 00:20:47.689 16:59:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:47.689 16:59:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:47.948 16:59:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:47.948 16:59:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:47.948 16:59:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:47.948 16:59:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:47.948 16:59:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:47.948 16:59:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:47.948 [2024-11-05 16:59:36.773448] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.948 16:59:36 -- bdev/nbd_common.sh@41 -- # break 00:20:47.948 16:59:36 -- bdev/nbd_common.sh@45 -- # return 0 00:20:47.948 16:59:36 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:48.206 [2024-11-05 16:59:36.973056] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:48.206 16:59:36 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:48.206 16:59:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:48.206 16:59:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:48.206 16:59:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:48.206 16:59:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:48.206 16:59:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:48.206 16:59:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:48.206 16:59:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:48.206 16:59:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:48.207 16:59:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:48.207 16:59:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.207 16:59:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.465 16:59:37 
-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:48.465 "name": "raid_bdev1", 00:20:48.465 "uuid": "4a122392-a74c-4fa3-bd86-585b81d3ef97", 00:20:48.465 "strip_size_kb": 0, 00:20:48.465 "state": "online", 00:20:48.465 "raid_level": "raid1", 00:20:48.465 "superblock": true, 00:20:48.465 "num_base_bdevs": 2, 00:20:48.465 "num_base_bdevs_discovered": 1, 00:20:48.465 "num_base_bdevs_operational": 1, 00:20:48.465 "base_bdevs_list": [ 00:20:48.465 { 00:20:48.465 "name": null, 00:20:48.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.465 "is_configured": false, 00:20:48.465 "data_offset": 2048, 00:20:48.465 "data_size": 63488 00:20:48.465 }, 00:20:48.465 { 00:20:48.465 "name": "BaseBdev2", 00:20:48.465 "uuid": "b70b46be-04a4-5f7b-8109-69e5782621bf", 00:20:48.465 "is_configured": true, 00:20:48.465 "data_offset": 2048, 00:20:48.465 "data_size": 63488 00:20:48.465 } 00:20:48.465 ] 00:20:48.465 }' 00:20:48.465 16:59:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:48.465 16:59:37 -- common/autotest_common.sh@10 -- # set +x 00:20:49.032 16:59:37 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:49.291 [2024-11-05 16:59:38.105395] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:49.291 [2024-11-05 16:59:38.105620] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:49.291 [2024-11-05 16:59:38.118556] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca2e80 00:20:49.291 [2024-11-05 16:59:38.120752] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:49.291 16:59:38 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:50.667 16:59:39 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:50.667 16:59:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:50.667 16:59:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:50.667 16:59:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:50.667 16:59:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:50.667 16:59:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.667 16:59:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.667 16:59:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:50.667 "name": "raid_bdev1", 00:20:50.667 "uuid": "4a122392-a74c-4fa3-bd86-585b81d3ef97", 00:20:50.667 "strip_size_kb": 0, 00:20:50.667 "state": "online", 00:20:50.667 "raid_level": "raid1", 00:20:50.667 "superblock": true, 00:20:50.667 "num_base_bdevs": 2, 00:20:50.667 "num_base_bdevs_discovered": 2, 00:20:50.667 "num_base_bdevs_operational": 2, 00:20:50.667 "process": { 00:20:50.667 "type": "rebuild", 00:20:50.667 "target": "spare", 00:20:50.667 "progress": { 00:20:50.667 "blocks": 24576, 00:20:50.667 "percent": 38 00:20:50.667 } 00:20:50.667 }, 00:20:50.667 "base_bdevs_list": [ 00:20:50.667 { 00:20:50.667 "name": "spare", 00:20:50.667 "uuid": "4a4cf4d6-fb71-593c-8929-506839fece4b", 00:20:50.667 "is_configured": true, 00:20:50.667 "data_offset": 2048, 00:20:50.667 "data_size": 63488 00:20:50.667 }, 00:20:50.667 { 00:20:50.667 "name": "BaseBdev2", 00:20:50.667 "uuid": "b70b46be-04a4-5f7b-8109-69e5782621bf", 00:20:50.667 "is_configured": true, 00:20:50.667 "data_offset": 2048, 00:20:50.667 "data_size": 63488 00:20:50.667 } 
00:20:50.667 ] 00:20:50.667 }' 00:20:50.667 16:59:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:50.667 16:59:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:50.667 16:59:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:50.667 16:59:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:50.667 16:59:39 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:50.933 [2024-11-05 16:59:39.682574] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:50.933 [2024-11-05 16:59:39.730550] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:50.933 [2024-11-05 16:59:39.731246] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.933 16:59:39 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:50.933 16:59:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:50.933 16:59:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:50.933 16:59:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:50.933 16:59:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:50.933 16:59:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:50.933 16:59:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:50.933 16:59:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:50.933 16:59:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:50.933 16:59:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:50.933 16:59:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.933 16:59:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.195 16:59:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:51.195 "name": "raid_bdev1", 00:20:51.195 "uuid": "4a122392-a74c-4fa3-bd86-585b81d3ef97", 00:20:51.195 "strip_size_kb": 0, 00:20:51.195 "state": "online", 00:20:51.195 "raid_level": "raid1", 00:20:51.195 "superblock": true, 00:20:51.195 "num_base_bdevs": 2, 00:20:51.195 "num_base_bdevs_discovered": 1, 00:20:51.195 "num_base_bdevs_operational": 1, 00:20:51.195 "base_bdevs_list": [ 00:20:51.195 { 00:20:51.195 "name": null, 00:20:51.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.195 "is_configured": false, 00:20:51.195 "data_offset": 2048, 00:20:51.195 "data_size": 63488 00:20:51.195 }, 00:20:51.195 { 00:20:51.195 "name": "BaseBdev2", 00:20:51.195 "uuid": "b70b46be-04a4-5f7b-8109-69e5782621bf", 00:20:51.195 "is_configured": true, 00:20:51.195 "data_offset": 2048, 00:20:51.195 "data_size": 63488 00:20:51.195 } 00:20:51.195 ] 00:20:51.195 }' 00:20:51.195 16:59:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:51.195 16:59:40 -- common/autotest_common.sh@10 -- # set +x 00:20:52.131 16:59:40 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:52.131 16:59:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:52.131 16:59:40 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:52.131 16:59:40 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:52.131 16:59:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:52.131 16:59:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
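The checks around the remove_base_bdev call above are the verification idiom this whole test leans on: fetch every raid bdev, isolate raid_bdev1, and let jq's // alternative operator substitute the literal "none" when no rebuild process is attached, so the same two assertions work mid-rebuild ("rebuild"/"spare") and at rest ("none"/"none"). Pulled together into one runnable fragment, assuming the same socket and bdev name as the trace:

    rpc=/var/tmp/spdk-raid.sock
    expected_type=none expected_target=none   # or rebuild / spare while a rebuild runs
    info=$(scripts/rpc.py -s "$rpc" bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.process.type // "none"' <<< "$info") == "$expected_type" ]] &&
        [[ $(jq -r '.process.target // "none"' <<< "$info") == "$expected_target" ]]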
00:20:52.131 16:59:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.131 16:59:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:52.131 "name": "raid_bdev1", 00:20:52.131 "uuid": "4a122392-a74c-4fa3-bd86-585b81d3ef97", 00:20:52.131 "strip_size_kb": 0, 00:20:52.131 "state": "online", 00:20:52.131 "raid_level": "raid1", 00:20:52.131 "superblock": true, 00:20:52.131 "num_base_bdevs": 2, 00:20:52.131 "num_base_bdevs_discovered": 1, 00:20:52.131 "num_base_bdevs_operational": 1, 00:20:52.131 "base_bdevs_list": [ 00:20:52.131 { 00:20:52.131 "name": null, 00:20:52.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.131 "is_configured": false, 00:20:52.131 "data_offset": 2048, 00:20:52.131 "data_size": 63488 00:20:52.131 }, 00:20:52.131 { 00:20:52.131 "name": "BaseBdev2", 00:20:52.131 "uuid": "b70b46be-04a4-5f7b-8109-69e5782621bf", 00:20:52.131 "is_configured": true, 00:20:52.131 "data_offset": 2048, 00:20:52.131 "data_size": 63488 00:20:52.131 } 00:20:52.131 ] 00:20:52.131 }' 00:20:52.131 16:59:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:52.131 16:59:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:52.131 16:59:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:52.390 16:59:41 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:52.390 16:59:41 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:52.390 [2024-11-05 16:59:41.271377] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:52.390 [2024-11-05 16:59:41.271593] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:52.390 [2024-11-05 16:59:41.283907] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3020 00:20:52.390 [2024-11-05 16:59:41.286215] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:52.648 16:59:41 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:53.582 16:59:42 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.582 16:59:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:53.582 16:59:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:53.582 16:59:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:53.582 16:59:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:53.582 16:59:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.582 16:59:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.841 16:59:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:53.841 "name": "raid_bdev1", 00:20:53.841 "uuid": "4a122392-a74c-4fa3-bd86-585b81d3ef97", 00:20:53.841 "strip_size_kb": 0, 00:20:53.841 "state": "online", 00:20:53.841 "raid_level": "raid1", 00:20:53.841 "superblock": true, 00:20:53.841 "num_base_bdevs": 2, 00:20:53.841 "num_base_bdevs_discovered": 2, 00:20:53.841 "num_base_bdevs_operational": 2, 00:20:53.841 "process": { 00:20:53.841 "type": "rebuild", 00:20:53.841 "target": "spare", 00:20:53.841 "progress": { 00:20:53.841 "blocks": 24576, 00:20:53.841 "percent": 38 00:20:53.841 } 00:20:53.841 }, 00:20:53.841 "base_bdevs_list": [ 00:20:53.841 { 00:20:53.841 "name": "spare", 00:20:53.841 "uuid": "4a4cf4d6-fb71-593c-8929-506839fece4b", 00:20:53.841 "is_configured": true, 
00:20:53.841 "data_offset": 2048, 00:20:53.841 "data_size": 63488 00:20:53.841 }, 00:20:53.841 { 00:20:53.841 "name": "BaseBdev2", 00:20:53.841 "uuid": "b70b46be-04a4-5f7b-8109-69e5782621bf", 00:20:53.841 "is_configured": true, 00:20:53.841 "data_offset": 2048, 00:20:53.841 "data_size": 63488 00:20:53.841 } 00:20:53.841 ] 00:20:53.841 }' 00:20:53.841 16:59:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:53.841 16:59:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:53.841 16:59:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:53.841 16:59:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:53.841 16:59:42 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:53.841 16:59:42 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:53.841 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:53.841 16:59:42 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:53.841 16:59:42 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:53.841 16:59:42 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:53.841 16:59:42 -- bdev/bdev_raid.sh@657 -- # local timeout=422 00:20:53.841 16:59:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:53.841 16:59:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.841 16:59:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:53.841 16:59:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:53.841 16:59:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:53.841 16:59:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:53.841 16:59:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.841 16:59:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.100 16:59:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:54.100 "name": "raid_bdev1", 00:20:54.100 "uuid": "4a122392-a74c-4fa3-bd86-585b81d3ef97", 00:20:54.100 "strip_size_kb": 0, 00:20:54.100 "state": "online", 00:20:54.100 "raid_level": "raid1", 00:20:54.100 "superblock": true, 00:20:54.100 "num_base_bdevs": 2, 00:20:54.100 "num_base_bdevs_discovered": 2, 00:20:54.100 "num_base_bdevs_operational": 2, 00:20:54.100 "process": { 00:20:54.100 "type": "rebuild", 00:20:54.100 "target": "spare", 00:20:54.100 "progress": { 00:20:54.100 "blocks": 32768, 00:20:54.100 "percent": 51 00:20:54.100 } 00:20:54.100 }, 00:20:54.100 "base_bdevs_list": [ 00:20:54.100 { 00:20:54.100 "name": "spare", 00:20:54.100 "uuid": "4a4cf4d6-fb71-593c-8929-506839fece4b", 00:20:54.100 "is_configured": true, 00:20:54.100 "data_offset": 2048, 00:20:54.100 "data_size": 63488 00:20:54.100 }, 00:20:54.100 { 00:20:54.100 "name": "BaseBdev2", 00:20:54.100 "uuid": "b70b46be-04a4-5f7b-8109-69e5782621bf", 00:20:54.100 "is_configured": true, 00:20:54.100 "data_offset": 2048, 00:20:54.100 "data_size": 63488 00:20:54.100 } 00:20:54.100 ] 00:20:54.100 }' 00:20:54.100 16:59:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:54.358 16:59:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:54.358 16:59:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:54.358 16:59:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:54.358 16:59:43 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:55.294 16:59:44 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < 
timeout )) 00:20:55.294 16:59:44 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:55.294 16:59:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:55.294 16:59:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:55.294 16:59:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:55.294 16:59:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:55.294 16:59:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.294 16:59:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.552 16:59:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:55.552 "name": "raid_bdev1", 00:20:55.552 "uuid": "4a122392-a74c-4fa3-bd86-585b81d3ef97", 00:20:55.552 "strip_size_kb": 0, 00:20:55.552 "state": "online", 00:20:55.552 "raid_level": "raid1", 00:20:55.552 "superblock": true, 00:20:55.552 "num_base_bdevs": 2, 00:20:55.552 "num_base_bdevs_discovered": 2, 00:20:55.553 "num_base_bdevs_operational": 2, 00:20:55.553 "process": { 00:20:55.553 "type": "rebuild", 00:20:55.553 "target": "spare", 00:20:55.553 "progress": { 00:20:55.553 "blocks": 61440, 00:20:55.553 "percent": 96 00:20:55.553 } 00:20:55.553 }, 00:20:55.553 "base_bdevs_list": [ 00:20:55.553 { 00:20:55.553 "name": "spare", 00:20:55.553 "uuid": "4a4cf4d6-fb71-593c-8929-506839fece4b", 00:20:55.553 "is_configured": true, 00:20:55.553 "data_offset": 2048, 00:20:55.553 "data_size": 63488 00:20:55.553 }, 00:20:55.553 { 00:20:55.553 "name": "BaseBdev2", 00:20:55.553 "uuid": "b70b46be-04a4-5f7b-8109-69e5782621bf", 00:20:55.553 "is_configured": true, 00:20:55.553 "data_offset": 2048, 00:20:55.553 "data_size": 63488 00:20:55.553 } 00:20:55.553 ] 00:20:55.553 }' 00:20:55.553 16:59:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:55.553 16:59:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:55.553 16:59:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:55.553 [2024-11-05 16:59:44.405857] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:55.553 [2024-11-05 16:59:44.406156] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:55.553 [2024-11-05 16:59:44.406415] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.553 16:59:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:55.553 16:59:44 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:56.928 "name": "raid_bdev1", 00:20:56.928 "uuid": "4a122392-a74c-4fa3-bd86-585b81d3ef97", 00:20:56.928 "strip_size_kb": 0, 00:20:56.928 "state": "online", 00:20:56.928 
"raid_level": "raid1", 00:20:56.928 "superblock": true, 00:20:56.928 "num_base_bdevs": 2, 00:20:56.928 "num_base_bdevs_discovered": 2, 00:20:56.928 "num_base_bdevs_operational": 2, 00:20:56.928 "base_bdevs_list": [ 00:20:56.928 { 00:20:56.928 "name": "spare", 00:20:56.928 "uuid": "4a4cf4d6-fb71-593c-8929-506839fece4b", 00:20:56.928 "is_configured": true, 00:20:56.928 "data_offset": 2048, 00:20:56.928 "data_size": 63488 00:20:56.928 }, 00:20:56.928 { 00:20:56.928 "name": "BaseBdev2", 00:20:56.928 "uuid": "b70b46be-04a4-5f7b-8109-69e5782621bf", 00:20:56.928 "is_configured": true, 00:20:56.928 "data_offset": 2048, 00:20:56.928 "data_size": 63488 00:20:56.928 } 00:20:56.928 ] 00:20:56.928 }' 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@660 -- # break 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.928 16:59:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.187 16:59:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:57.187 "name": "raid_bdev1", 00:20:57.187 "uuid": "4a122392-a74c-4fa3-bd86-585b81d3ef97", 00:20:57.187 "strip_size_kb": 0, 00:20:57.187 "state": "online", 00:20:57.187 "raid_level": "raid1", 00:20:57.187 "superblock": true, 00:20:57.187 "num_base_bdevs": 2, 00:20:57.187 "num_base_bdevs_discovered": 2, 00:20:57.187 "num_base_bdevs_operational": 2, 00:20:57.187 "base_bdevs_list": [ 00:20:57.187 { 00:20:57.187 "name": "spare", 00:20:57.187 "uuid": "4a4cf4d6-fb71-593c-8929-506839fece4b", 00:20:57.187 "is_configured": true, 00:20:57.187 "data_offset": 2048, 00:20:57.187 "data_size": 63488 00:20:57.187 }, 00:20:57.187 { 00:20:57.187 "name": "BaseBdev2", 00:20:57.187 "uuid": "b70b46be-04a4-5f7b-8109-69e5782621bf", 00:20:57.187 "is_configured": true, 00:20:57.187 "data_offset": 2048, 00:20:57.187 "data_size": 63488 00:20:57.187 } 00:20:57.187 ] 00:20:57.187 }' 00:20:57.187 16:59:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:57.446 16:59:46 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:57.446 16:59:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:57.446 16:59:46 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:57.446 16:59:46 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:57.446 16:59:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:57.446 16:59:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:57.446 16:59:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:57.446 16:59:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:57.446 16:59:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:57.446 16:59:46 -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:20:57.446 16:59:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:57.446 16:59:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:57.446 16:59:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:57.446 16:59:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.446 16:59:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.705 16:59:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:57.705 "name": "raid_bdev1", 00:20:57.705 "uuid": "4a122392-a74c-4fa3-bd86-585b81d3ef97", 00:20:57.705 "strip_size_kb": 0, 00:20:57.705 "state": "online", 00:20:57.705 "raid_level": "raid1", 00:20:57.705 "superblock": true, 00:20:57.705 "num_base_bdevs": 2, 00:20:57.705 "num_base_bdevs_discovered": 2, 00:20:57.705 "num_base_bdevs_operational": 2, 00:20:57.705 "base_bdevs_list": [ 00:20:57.705 { 00:20:57.705 "name": "spare", 00:20:57.705 "uuid": "4a4cf4d6-fb71-593c-8929-506839fece4b", 00:20:57.705 "is_configured": true, 00:20:57.705 "data_offset": 2048, 00:20:57.705 "data_size": 63488 00:20:57.705 }, 00:20:57.705 { 00:20:57.705 "name": "BaseBdev2", 00:20:57.705 "uuid": "b70b46be-04a4-5f7b-8109-69e5782621bf", 00:20:57.705 "is_configured": true, 00:20:57.705 "data_offset": 2048, 00:20:57.705 "data_size": 63488 00:20:57.705 } 00:20:57.705 ] 00:20:57.705 }' 00:20:57.705 16:59:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:57.705 16:59:46 -- common/autotest_common.sh@10 -- # set +x 00:20:58.272 16:59:47 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:58.534 [2024-11-05 16:59:47.236942] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:58.534 [2024-11-05 16:59:47.237168] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:58.534 [2024-11-05 16:59:47.237458] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:58.534 [2024-11-05 16:59:47.237684] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:58.534 [2024-11-05 16:59:47.237839] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:20:58.534 16:59:47 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:58.534 16:59:47 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.793 16:59:47 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:58.793 16:59:47 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:58.793 16:59:47 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:58.793 16:59:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:58.793 16:59:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:58.793 16:59:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:58.793 16:59:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:58.793 16:59:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:58.793 16:59:47 -- bdev/nbd_common.sh@12 -- # local i 00:20:58.793 16:59:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:58.793 16:59:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:58.793 16:59:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:59.051 /dev/nbd0 00:20:59.051 16:59:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:59.051 16:59:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:59.051 16:59:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:59.051 16:59:47 -- common/autotest_common.sh@867 -- # local i 00:20:59.051 16:59:47 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:59.051 16:59:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:59.051 16:59:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:59.051 16:59:47 -- common/autotest_common.sh@871 -- # break 00:20:59.051 16:59:47 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:59.051 16:59:47 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:59.051 16:59:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:59.051 1+0 records in 00:20:59.051 1+0 records out 00:20:59.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638812 s, 6.4 MB/s 00:20:59.052 16:59:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:59.052 16:59:47 -- common/autotest_common.sh@884 -- # size=4096 00:20:59.052 16:59:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:59.052 16:59:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:59.052 16:59:47 -- common/autotest_common.sh@887 -- # return 0 00:20:59.052 16:59:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:59.052 16:59:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:59.052 16:59:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:59.310 /dev/nbd1 00:20:59.310 16:59:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:59.310 16:59:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:59.310 16:59:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:59.310 16:59:48 -- common/autotest_common.sh@867 -- # local i 00:20:59.310 16:59:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:59.310 16:59:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:59.310 16:59:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:59.310 16:59:48 -- common/autotest_common.sh@871 -- # break 00:20:59.310 16:59:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:59.310 16:59:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:59.310 16:59:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:59.310 1+0 records in 00:20:59.310 1+0 records out 00:20:59.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581925 s, 7.0 MB/s 00:20:59.310 16:59:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:59.310 16:59:48 -- common/autotest_common.sh@884 -- # size=4096 00:20:59.310 16:59:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:59.310 16:59:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:59.310 16:59:48 -- common/autotest_common.sh@887 -- # return 0 00:20:59.310 16:59:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:59.310 16:59:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:59.310 16:59:48 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 
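The cmp just issued is the payoff of the superblock test, and the 1048576 is not arbitrary: the trace earlier reported data_offset 2048 for every member and a 512-byte block length, so user data starts 2048 * 512 = 1 MiB into each base bdev, with the raid superblock occupying the region before it. Skipping that region compares only the rebuilt payload; the plain raid_rebuild_test earlier used cmp -i 0 because without -s there is no superblock and data_offset is 0:

    offset_bytes=$(( 2048 * 512 ))               # = 1048576
    cmp -i "$offset_bytes" /dev/nbd0 /dev/nbd1   # exit 0 iff the rebuild copied the data faithfully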
00:20:59.569 16:59:48 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:59.569 16:59:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:59.569 16:59:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:59.569 16:59:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:59.569 16:59:48 -- bdev/nbd_common.sh@51 -- # local i 00:20:59.569 16:59:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:59.569 16:59:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:59.828 16:59:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:59.828 16:59:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:59.828 16:59:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:59.828 16:59:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:59.828 16:59:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:59.828 16:59:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:59.828 16:59:48 -- bdev/nbd_common.sh@41 -- # break 00:20:59.828 16:59:48 -- bdev/nbd_common.sh@45 -- # return 0 00:20:59.828 16:59:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:59.828 16:59:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:00.087 16:59:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:00.087 16:59:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:00.087 16:59:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:00.087 16:59:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:00.087 16:59:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:00.087 16:59:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:00.087 16:59:48 -- bdev/nbd_common.sh@41 -- # break 00:21:00.087 16:59:48 -- bdev/nbd_common.sh@45 -- # return 0 00:21:00.087 16:59:48 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:00.087 16:59:48 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:00.087 16:59:48 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:00.087 16:59:48 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:00.345 16:59:49 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:00.604 [2024-11-05 16:59:49.325558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:00.604 [2024-11-05 16:59:49.325873] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.604 [2024-11-05 16:59:49.326064] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:21:00.604 [2024-11-05 16:59:49.326232] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.604 [2024-11-05 16:59:49.328405] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.604 [2024-11-05 16:59:49.328616] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:00.604 [2024-11-05 16:59:49.328843] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:00.604 [2024-11-05 16:59:49.329003] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:00.604 BaseBdev1 
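What follows in the trace is the re-assembly half of the superblock test: each passthru bdev is deleted and recreated over its original backing store, the reappearing bdev triggers bdev_raid's examine path (raid_bdev_examine_load_sb_cb), and because the array was created with -s the on-disk superblock lets raid_bdev1 re-claim the member with no explicit add_base_bdev call. The same RPC sequence, condensed; note the spare sits on its delay bdev rather than directly on a malloc, exactly as the log shows:

    rpc=/var/tmp/spdk-raid.sock
    for bdev in BaseBdev1 BaseBdev2; do
        scripts/rpc.py -s "$rpc" bdev_passthru_delete "$bdev"
        scripts/rpc.py -s "$rpc" bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
    done
    scripts/rpc.py -s "$rpc" bdev_passthru_delete spare
    scripts/rpc.py -s "$rpc" bdev_passthru_create -b spare_delay -p spare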
00:21:00.604 16:59:49 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:00.604 16:59:49 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:21:00.604 16:59:49 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:21:00.863 16:59:49 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:01.121 [2024-11-05 16:59:49.873753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:01.121 [2024-11-05 16:59:49.874049] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.121 [2024-11-05 16:59:49.874224] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:21:01.121 [2024-11-05 16:59:49.874354] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.121 [2024-11-05 16:59:49.875104] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.121 [2024-11-05 16:59:49.875344] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:01.121 [2024-11-05 16:59:49.875586] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:21:01.121 [2024-11-05 16:59:49.875712] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:21:01.121 [2024-11-05 16:59:49.875897] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:01.121 [2024-11-05 16:59:49.876100] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state configuring 00:21:01.121 [2024-11-05 16:59:49.876268] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:01.121 BaseBdev2 00:21:01.121 16:59:49 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:01.380 16:59:50 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:01.640 [2024-11-05 16:59:50.369938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:01.640 [2024-11-05 16:59:50.370208] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.640 [2024-11-05 16:59:50.370294] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:01.640 [2024-11-05 16:59:50.370499] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.640 [2024-11-05 16:59:50.371228] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.640 [2024-11-05 16:59:50.371456] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:01.640 [2024-11-05 16:59:50.371688] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:01.640 [2024-11-05 16:59:50.371879] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:01.640 spare 00:21:01.640 16:59:50 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:01.640 16:59:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:01.640 16:59:50 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:21:01.640 16:59:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:01.640 16:59:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:01.640 16:59:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:01.640 16:59:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:01.640 16:59:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:01.640 16:59:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:01.640 16:59:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:01.640 16:59:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.640 16:59:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.640 [2024-11-05 16:59:50.472133] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:21:01.640 [2024-11-05 16:59:50.472318] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:01.640 [2024-11-05 16:59:50.472491] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:21:01.640 [2024-11-05 16:59:50.473139] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:21:01.640 [2024-11-05 16:59:50.473366] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:21:01.640 [2024-11-05 16:59:50.473645] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:01.899 16:59:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:01.899 "name": "raid_bdev1", 00:21:01.899 "uuid": "4a122392-a74c-4fa3-bd86-585b81d3ef97", 00:21:01.899 "strip_size_kb": 0, 00:21:01.899 "state": "online", 00:21:01.899 "raid_level": "raid1", 00:21:01.899 "superblock": true, 00:21:01.899 "num_base_bdevs": 2, 00:21:01.899 "num_base_bdevs_discovered": 2, 00:21:01.899 "num_base_bdevs_operational": 2, 00:21:01.899 "base_bdevs_list": [ 00:21:01.899 { 00:21:01.899 "name": "spare", 00:21:01.899 "uuid": "4a4cf4d6-fb71-593c-8929-506839fece4b", 00:21:01.899 "is_configured": true, 00:21:01.899 "data_offset": 2048, 00:21:01.899 "data_size": 63488 00:21:01.899 }, 00:21:01.899 { 00:21:01.899 "name": "BaseBdev2", 00:21:01.899 "uuid": "b70b46be-04a4-5f7b-8109-69e5782621bf", 00:21:01.899 "is_configured": true, 00:21:01.899 "data_offset": 2048, 00:21:01.899 "data_size": 63488 00:21:01.899 } 00:21:01.899 ] 00:21:01.899 }' 00:21:01.899 16:59:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:01.899 16:59:50 -- common/autotest_common.sh@10 -- # set +x 00:21:02.465 16:59:51 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:02.465 16:59:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:02.465 16:59:51 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:02.465 16:59:51 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:02.465 16:59:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:02.465 16:59:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.465 16:59:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.724 16:59:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:02.724 "name": "raid_bdev1", 00:21:02.724 "uuid": "4a122392-a74c-4fa3-bd86-585b81d3ef97", 00:21:02.724 "strip_size_kb": 0, 00:21:02.724 "state": "online", 00:21:02.724 "raid_level": "raid1", 
00:21:02.724 "superblock": true, 00:21:02.724 "num_base_bdevs": 2, 00:21:02.724 "num_base_bdevs_discovered": 2, 00:21:02.724 "num_base_bdevs_operational": 2, 00:21:02.724 "base_bdevs_list": [ 00:21:02.724 { 00:21:02.724 "name": "spare", 00:21:02.724 "uuid": "4a4cf4d6-fb71-593c-8929-506839fece4b", 00:21:02.724 "is_configured": true, 00:21:02.724 "data_offset": 2048, 00:21:02.724 "data_size": 63488 00:21:02.724 }, 00:21:02.724 { 00:21:02.724 "name": "BaseBdev2", 00:21:02.724 "uuid": "b70b46be-04a4-5f7b-8109-69e5782621bf", 00:21:02.724 "is_configured": true, 00:21:02.724 "data_offset": 2048, 00:21:02.724 "data_size": 63488 00:21:02.724 } 00:21:02.724 ] 00:21:02.724 }' 00:21:02.724 16:59:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:02.982 16:59:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:02.982 16:59:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:02.982 16:59:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:02.983 16:59:51 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.983 16:59:51 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:03.241 16:59:51 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:03.241 16:59:51 -- bdev/bdev_raid.sh@709 -- # killprocess 122960 00:21:03.241 16:59:51 -- common/autotest_common.sh@936 -- # '[' -z 122960 ']' 00:21:03.241 16:59:51 -- common/autotest_common.sh@940 -- # kill -0 122960 00:21:03.241 16:59:51 -- common/autotest_common.sh@941 -- # uname 00:21:03.241 16:59:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:03.241 16:59:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122960 00:21:03.241 killing process with pid 122960 00:21:03.241 Received shutdown signal, test time was about 60.000000 seconds 00:21:03.241 00:21:03.241 Latency(us) 00:21:03.241 [2024-11-05T16:59:52.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.241 [2024-11-05T16:59:52.118Z] =================================================================================================================== 00:21:03.241 [2024-11-05T16:59:52.118Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:03.241 16:59:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:03.241 16:59:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:03.241 16:59:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122960' 00:21:03.241 16:59:52 -- common/autotest_common.sh@955 -- # kill 122960 00:21:03.241 16:59:52 -- common/autotest_common.sh@960 -- # wait 122960 00:21:03.241 [2024-11-05 16:59:52.004653] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:03.241 [2024-11-05 16:59:52.004736] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:03.241 [2024-11-05 16:59:52.004816] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:03.242 [2024-11-05 16:59:52.004833] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:21:03.500 [2024-11-05 16:59:52.271953] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:04.877 ************************************ 00:21:04.877 END TEST raid_rebuild_test_sb 00:21:04.877 ************************************ 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@711 -- # return 0 
00:21:04.877 00:21:04.877 real 0m26.963s 00:21:04.877 user 0m39.106s 00:21:04.877 sys 0m4.296s 00:21:04.877 16:59:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:04.877 16:59:53 -- common/autotest_common.sh@10 -- # set +x 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:21:04.877 16:59:53 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:21:04.877 16:59:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:04.877 16:59:53 -- common/autotest_common.sh@10 -- # set +x 00:21:04.877 ************************************ 00:21:04.877 START TEST raid_rebuild_test_io 00:21:04.877 ************************************ 00:21:04.877 16:59:53 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 false true 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@544 -- # raid_pid=123606 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:04.877 16:59:53 -- bdev/bdev_raid.sh@545 -- # waitforlisten 123606 /var/tmp/spdk-raid.sock 00:21:04.877 16:59:53 -- common/autotest_common.sh@829 -- # '[' -z 123606 ']' 00:21:04.877 16:59:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:04.877 16:59:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:04.877 16:59:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:04.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
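raid_rebuild_test_io exercises the same rebuild flow as the sb variant, but with background traffic: bdevperf is launched paused (-z) against the raid RPC socket, and the I/O is kicked off later through its perform_tests helper, as seen further down. A sketch of the launch, following the command line captured above (per bdevperf usage: -t 60 second runtime, -w randrw with a -M 50 read percentage, -o 3M I/O size, -q 2 queue depth; raid_pid is just an illustrative name):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  "$bdevperf" -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw \
      -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # Later, once raid_bdev1 has been assembled over RPC:
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/spdk-raid.sock perform_tests &
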
00:21:04.877 16:59:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:04.877 16:59:53 -- common/autotest_common.sh@10 -- # set +x 00:21:04.877 [2024-11-05 16:59:53.700603] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:04.877 [2024-11-05 16:59:53.701090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123606 ] 00:21:04.877 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:04.877 Zero copy mechanism will not be used. 00:21:05.136 [2024-11-05 16:59:53.888349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.395 [2024-11-05 16:59:54.135587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.653 [2024-11-05 16:59:54.368066] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:05.945 16:59:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.945 16:59:54 -- common/autotest_common.sh@862 -- # return 0 00:21:05.945 16:59:54 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:05.945 16:59:54 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:05.945 16:59:54 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:06.213 BaseBdev1 00:21:06.213 16:59:55 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:06.213 16:59:55 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:06.213 16:59:55 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:06.781 BaseBdev2 00:21:06.781 16:59:55 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:07.039 spare_malloc 00:21:07.039 16:59:55 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:07.297 spare_delay 00:21:07.297 16:59:56 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:07.556 [2024-11-05 16:59:56.200861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:07.556 [2024-11-05 16:59:56.201089] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.556 [2024-11-05 16:59:56.201159] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:21:07.556 [2024-11-05 16:59:56.201451] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.556 [2024-11-05 16:59:56.203857] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:07.556 [2024-11-05 16:59:56.204025] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:07.556 spare 00:21:07.556 16:59:56 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:07.556 [2024-11-05 16:59:56.396982] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:07.556 [2024-11-05 16:59:56.399085] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:07.556 [2024-11-05 16:59:56.399357] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:21:07.556 [2024-11-05 16:59:56.399520] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:07.556 [2024-11-05 16:59:56.399720] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:21:07.556 [2024-11-05 16:59:56.400212] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:21:07.556 [2024-11-05 16:59:56.400354] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:21:07.556 [2024-11-05 16:59:56.400603] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.556 16:59:56 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:07.556 16:59:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:07.556 16:59:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:07.556 16:59:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:07.556 16:59:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:07.556 16:59:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:07.556 16:59:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:07.556 16:59:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:07.556 16:59:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:07.556 16:59:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:07.556 16:59:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.556 16:59:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.815 16:59:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:07.815 "name": "raid_bdev1", 00:21:07.815 "uuid": "0c232a94-f39b-4de0-b138-32c58fc0e12a", 00:21:07.815 "strip_size_kb": 0, 00:21:07.815 "state": "online", 00:21:07.815 "raid_level": "raid1", 00:21:07.815 "superblock": false, 00:21:07.815 "num_base_bdevs": 2, 00:21:07.815 "num_base_bdevs_discovered": 2, 00:21:07.815 "num_base_bdevs_operational": 2, 00:21:07.815 "base_bdevs_list": [ 00:21:07.815 { 00:21:07.815 "name": "BaseBdev1", 00:21:07.815 "uuid": "e8b0376d-4af8-430b-b1c7-779bfe23efa3", 00:21:07.815 "is_configured": true, 00:21:07.815 "data_offset": 0, 00:21:07.815 "data_size": 65536 00:21:07.815 }, 00:21:07.815 { 00:21:07.815 "name": "BaseBdev2", 00:21:07.815 "uuid": "c187e3c0-5d19-4c37-af9c-52e6bb054b73", 00:21:07.815 "is_configured": true, 00:21:07.815 "data_offset": 0, 00:21:07.815 "data_size": 65536 00:21:07.815 } 00:21:07.815 ] 00:21:07.815 }' 00:21:07.815 16:59:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:07.815 16:59:56 -- common/autotest_common.sh@10 -- # set +x 00:21:08.381 16:59:57 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:08.381 16:59:57 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:08.640 [2024-11-05 16:59:57.433381] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:08.640 16:59:57 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:08.640 16:59:57 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.640 
16:59:57 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:08.899 16:59:57 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:08.899 16:59:57 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:08.899 16:59:57 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:08.899 16:59:57 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:08.899 [2024-11-05 16:59:57.757171] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:08.899 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:08.899 Zero copy mechanism will not be used. 00:21:08.899 Running I/O for 60 seconds... 00:21:09.157 [2024-11-05 16:59:57.851214] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:09.157 [2024-11-05 16:59:57.863827] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005860 00:21:09.157 16:59:57 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:09.157 16:59:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:09.157 16:59:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:09.157 16:59:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:09.157 16:59:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:09.157 16:59:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:09.157 16:59:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:09.157 16:59:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:09.157 16:59:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:09.157 16:59:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:09.157 16:59:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.157 16:59:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.416 16:59:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:09.416 "name": "raid_bdev1", 00:21:09.416 "uuid": "0c232a94-f39b-4de0-b138-32c58fc0e12a", 00:21:09.416 "strip_size_kb": 0, 00:21:09.416 "state": "online", 00:21:09.416 "raid_level": "raid1", 00:21:09.416 "superblock": false, 00:21:09.416 "num_base_bdevs": 2, 00:21:09.416 "num_base_bdevs_discovered": 1, 00:21:09.416 "num_base_bdevs_operational": 1, 00:21:09.416 "base_bdevs_list": [ 00:21:09.416 { 00:21:09.416 "name": null, 00:21:09.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.416 "is_configured": false, 00:21:09.416 "data_offset": 0, 00:21:09.416 "data_size": 65536 00:21:09.416 }, 00:21:09.416 { 00:21:09.416 "name": "BaseBdev2", 00:21:09.416 "uuid": "c187e3c0-5d19-4c37-af9c-52e6bb054b73", 00:21:09.416 "is_configured": true, 00:21:09.416 "data_offset": 0, 00:21:09.416 "data_size": 65536 00:21:09.416 } 00:21:09.416 ] 00:21:09.416 }' 00:21:09.416 16:59:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:09.416 16:59:58 -- common/autotest_common.sh@10 -- # set +x 00:21:09.984 16:59:58 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:10.242 [2024-11-05 16:59:58.991532] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:10.242 [2024-11-05 16:59:58.991780] bdev_raid.c:2939:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:21:10.242 16:59:59 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:10.242 [2024-11-05 16:59:59.042260] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:21:10.242 [2024-11-05 16:59:59.044301] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:10.501 [2024-11-05 16:59:59.159244] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:10.501 [2024-11-05 16:59:59.159870] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:10.501 [2024-11-05 16:59:59.379270] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:10.501 [2024-11-05 16:59:59.379563] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:11.079 [2024-11-05 16:59:59.697485] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:11.079 [2024-11-05 16:59:59.912125] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:11.079 [2024-11-05 16:59:59.912477] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:11.337 17:00:00 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:11.337 17:00:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:11.338 17:00:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:11.338 17:00:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:11.338 17:00:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:11.338 17:00:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.338 17:00:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.596 [2024-11-05 17:00:00.270468] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:11.596 [2024-11-05 17:00:00.278289] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:11.596 17:00:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:11.596 "name": "raid_bdev1", 00:21:11.596 "uuid": "0c232a94-f39b-4de0-b138-32c58fc0e12a", 00:21:11.596 "strip_size_kb": 0, 00:21:11.596 "state": "online", 00:21:11.596 "raid_level": "raid1", 00:21:11.596 "superblock": false, 00:21:11.596 "num_base_bdevs": 2, 00:21:11.596 "num_base_bdevs_discovered": 2, 00:21:11.596 "num_base_bdevs_operational": 2, 00:21:11.596 "process": { 00:21:11.596 "type": "rebuild", 00:21:11.596 "target": "spare", 00:21:11.596 "progress": { 00:21:11.596 "blocks": 14336, 00:21:11.596 "percent": 21 00:21:11.596 } 00:21:11.596 }, 00:21:11.596 "base_bdevs_list": [ 00:21:11.596 { 00:21:11.596 "name": "spare", 00:21:11.596 "uuid": "ebb47ef9-2e24-5d57-b17c-dd119694c9e0", 00:21:11.596 "is_configured": true, 00:21:11.596 "data_offset": 0, 00:21:11.596 "data_size": 65536 00:21:11.596 }, 00:21:11.596 { 00:21:11.596 "name": "BaseBdev2", 00:21:11.596 "uuid": "c187e3c0-5d19-4c37-af9c-52e6bb054b73", 00:21:11.596 "is_configured": true, 00:21:11.596 "data_offset": 0, 00:21:11.596 "data_size": 65536 
00:21:11.596 } 00:21:11.596 ] 00:21:11.596 }' 00:21:11.596 17:00:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:11.596 17:00:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:11.596 17:00:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:11.596 17:00:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:11.597 17:00:00 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:11.856 [2024-11-05 17:00:00.509762] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:11.856 [2024-11-05 17:00:00.510246] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:11.856 [2024-11-05 17:00:00.619621] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:12.115 [2024-11-05 17:00:00.814176] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:12.115 [2024-11-05 17:00:00.816487] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:12.115 [2024-11-05 17:00:00.856351] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005860 00:21:12.115 17:00:00 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:12.115 17:00:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:12.115 17:00:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:12.115 17:00:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:12.115 17:00:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:12.115 17:00:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:12.115 17:00:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:12.115 17:00:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:12.115 17:00:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:12.115 17:00:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:12.115 17:00:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.115 17:00:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.373 17:00:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:12.373 "name": "raid_bdev1", 00:21:12.373 "uuid": "0c232a94-f39b-4de0-b138-32c58fc0e12a", 00:21:12.373 "strip_size_kb": 0, 00:21:12.373 "state": "online", 00:21:12.373 "raid_level": "raid1", 00:21:12.373 "superblock": false, 00:21:12.373 "num_base_bdevs": 2, 00:21:12.373 "num_base_bdevs_discovered": 1, 00:21:12.373 "num_base_bdevs_operational": 1, 00:21:12.373 "base_bdevs_list": [ 00:21:12.373 { 00:21:12.373 "name": null, 00:21:12.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.374 "is_configured": false, 00:21:12.374 "data_offset": 0, 00:21:12.374 "data_size": 65536 00:21:12.374 }, 00:21:12.374 { 00:21:12.374 "name": "BaseBdev2", 00:21:12.374 "uuid": "c187e3c0-5d19-4c37-af9c-52e6bb054b73", 00:21:12.374 "is_configured": true, 00:21:12.374 "data_offset": 0, 00:21:12.374 "data_size": 65536 00:21:12.374 } 00:21:12.374 ] 00:21:12.374 }' 00:21:12.374 17:00:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:12.374 17:00:01 -- common/autotest_common.sh@10 -- # set +x 00:21:12.940 17:00:01 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:21:12.940 17:00:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:12.940 17:00:01 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:12.940 17:00:01 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:12.940 17:00:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:12.940 17:00:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.940 17:00:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.198 17:00:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:13.198 "name": "raid_bdev1", 00:21:13.198 "uuid": "0c232a94-f39b-4de0-b138-32c58fc0e12a", 00:21:13.198 "strip_size_kb": 0, 00:21:13.198 "state": "online", 00:21:13.198 "raid_level": "raid1", 00:21:13.198 "superblock": false, 00:21:13.198 "num_base_bdevs": 2, 00:21:13.198 "num_base_bdevs_discovered": 1, 00:21:13.198 "num_base_bdevs_operational": 1, 00:21:13.198 "base_bdevs_list": [ 00:21:13.198 { 00:21:13.198 "name": null, 00:21:13.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.198 "is_configured": false, 00:21:13.198 "data_offset": 0, 00:21:13.198 "data_size": 65536 00:21:13.198 }, 00:21:13.198 { 00:21:13.198 "name": "BaseBdev2", 00:21:13.198 "uuid": "c187e3c0-5d19-4c37-af9c-52e6bb054b73", 00:21:13.198 "is_configured": true, 00:21:13.198 "data_offset": 0, 00:21:13.198 "data_size": 65536 00:21:13.198 } 00:21:13.198 ] 00:21:13.198 }' 00:21:13.198 17:00:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:13.457 17:00:02 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:13.457 17:00:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:13.457 17:00:02 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:13.457 17:00:02 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:13.716 [2024-11-05 17:00:02.388708] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:13.716 [2024-11-05 17:00:02.389052] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:13.716 [2024-11-05 17:00:02.440719] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:13.716 [2024-11-05 17:00:02.443047] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:13.716 17:00:02 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:13.716 [2024-11-05 17:00:02.571781] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:13.982 [2024-11-05 17:00:02.796940] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:13.983 [2024-11-05 17:00:02.797575] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:14.549 [2024-11-05 17:00:03.159119] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:14.549 [2024-11-05 17:00:03.273329] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:14.807 17:00:03 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:14.807 17:00:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:14.807 
17:00:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:14.807 17:00:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:14.807 17:00:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:14.807 17:00:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.807 17:00:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.807 [2024-11-05 17:00:03.528838] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:14.807 [2024-11-05 17:00:03.662768] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:14.807 17:00:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:14.807 "name": "raid_bdev1", 00:21:14.807 "uuid": "0c232a94-f39b-4de0-b138-32c58fc0e12a", 00:21:14.807 "strip_size_kb": 0, 00:21:14.807 "state": "online", 00:21:14.807 "raid_level": "raid1", 00:21:14.807 "superblock": false, 00:21:14.807 "num_base_bdevs": 2, 00:21:14.807 "num_base_bdevs_discovered": 2, 00:21:14.807 "num_base_bdevs_operational": 2, 00:21:14.807 "process": { 00:21:14.807 "type": "rebuild", 00:21:14.807 "target": "spare", 00:21:14.807 "progress": { 00:21:14.807 "blocks": 14336, 00:21:14.807 "percent": 21 00:21:14.807 } 00:21:14.807 }, 00:21:14.807 "base_bdevs_list": [ 00:21:14.807 { 00:21:14.807 "name": "spare", 00:21:14.807 "uuid": "ebb47ef9-2e24-5d57-b17c-dd119694c9e0", 00:21:14.807 "is_configured": true, 00:21:14.807 "data_offset": 0, 00:21:14.807 "data_size": 65536 00:21:14.807 }, 00:21:14.807 { 00:21:14.807 "name": "BaseBdev2", 00:21:14.807 "uuid": "c187e3c0-5d19-4c37-af9c-52e6bb054b73", 00:21:14.807 "is_configured": true, 00:21:14.807 "data_offset": 0, 00:21:14.807 "data_size": 65536 00:21:14.807 } 00:21:14.807 ] 00:21:14.807 }' 00:21:14.807 17:00:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:15.065 17:00:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:15.065 17:00:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:15.065 17:00:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:15.065 17:00:03 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:15.065 17:00:03 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:15.065 17:00:03 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:15.065 17:00:03 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:15.065 17:00:03 -- bdev/bdev_raid.sh@657 -- # local timeout=443 00:21:15.065 17:00:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:15.065 17:00:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:15.065 17:00:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:15.065 17:00:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:15.065 17:00:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:15.065 17:00:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:15.065 17:00:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.065 17:00:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.065 [2024-11-05 17:00:03.912093] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:15.065 [2024-11-05 
17:00:03.912824] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:15.324 17:00:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:15.324 "name": "raid_bdev1", 00:21:15.324 "uuid": "0c232a94-f39b-4de0-b138-32c58fc0e12a", 00:21:15.324 "strip_size_kb": 0, 00:21:15.324 "state": "online", 00:21:15.324 "raid_level": "raid1", 00:21:15.324 "superblock": false, 00:21:15.324 "num_base_bdevs": 2, 00:21:15.324 "num_base_bdevs_discovered": 2, 00:21:15.324 "num_base_bdevs_operational": 2, 00:21:15.324 "process": { 00:21:15.324 "type": "rebuild", 00:21:15.324 "target": "spare", 00:21:15.324 "progress": { 00:21:15.324 "blocks": 20480, 00:21:15.324 "percent": 31 00:21:15.324 } 00:21:15.324 }, 00:21:15.324 "base_bdevs_list": [ 00:21:15.324 { 00:21:15.324 "name": "spare", 00:21:15.324 "uuid": "ebb47ef9-2e24-5d57-b17c-dd119694c9e0", 00:21:15.324 "is_configured": true, 00:21:15.324 "data_offset": 0, 00:21:15.324 "data_size": 65536 00:21:15.324 }, 00:21:15.324 { 00:21:15.324 "name": "BaseBdev2", 00:21:15.324 "uuid": "c187e3c0-5d19-4c37-af9c-52e6bb054b73", 00:21:15.324 "is_configured": true, 00:21:15.324 "data_offset": 0, 00:21:15.324 "data_size": 65536 00:21:15.324 } 00:21:15.324 ] 00:21:15.324 }' 00:21:15.324 17:00:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:15.324 17:00:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:15.324 17:00:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:15.324 [2024-11-05 17:00:04.129690] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:15.324 17:00:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:15.324 17:00:04 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:15.891 [2024-11-05 17:00:04.603488] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:16.150 [2024-11-05 17:00:04.960221] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:16.409 17:00:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:16.409 17:00:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:16.409 17:00:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:16.409 17:00:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:16.409 17:00:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:16.409 17:00:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:16.409 [2024-11-05 17:00:05.180420] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:16.409 [2024-11-05 17:00:05.180763] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:16.409 17:00:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.409 17:00:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.668 17:00:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:16.668 "name": "raid_bdev1", 00:21:16.668 "uuid": "0c232a94-f39b-4de0-b138-32c58fc0e12a", 00:21:16.668 "strip_size_kb": 0, 00:21:16.668 "state": "online", 00:21:16.668 "raid_level": "raid1", 00:21:16.668 "superblock": false, 00:21:16.668 "num_base_bdevs": 2, 
00:21:16.668 "num_base_bdevs_discovered": 2, 00:21:16.668 "num_base_bdevs_operational": 2, 00:21:16.668 "process": { 00:21:16.668 "type": "rebuild", 00:21:16.668 "target": "spare", 00:21:16.668 "progress": { 00:21:16.668 "blocks": 34816, 00:21:16.668 "percent": 53 00:21:16.668 } 00:21:16.668 }, 00:21:16.668 "base_bdevs_list": [ 00:21:16.668 { 00:21:16.668 "name": "spare", 00:21:16.668 "uuid": "ebb47ef9-2e24-5d57-b17c-dd119694c9e0", 00:21:16.668 "is_configured": true, 00:21:16.668 "data_offset": 0, 00:21:16.668 "data_size": 65536 00:21:16.668 }, 00:21:16.668 { 00:21:16.668 "name": "BaseBdev2", 00:21:16.668 "uuid": "c187e3c0-5d19-4c37-af9c-52e6bb054b73", 00:21:16.668 "is_configured": true, 00:21:16.668 "data_offset": 0, 00:21:16.668 "data_size": 65536 00:21:16.668 } 00:21:16.668 ] 00:21:16.668 }' 00:21:16.668 17:00:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:16.668 17:00:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:16.668 17:00:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:16.668 17:00:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:16.668 17:00:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:16.927 [2024-11-05 17:00:05.616549] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:17.863 17:00:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:17.863 17:00:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.863 17:00:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:17.863 17:00:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:17.863 17:00:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:17.863 17:00:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:17.863 17:00:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.863 17:00:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.863 [2024-11-05 17:00:06.703998] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:18.122 17:00:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:18.122 "name": "raid_bdev1", 00:21:18.122 "uuid": "0c232a94-f39b-4de0-b138-32c58fc0e12a", 00:21:18.122 "strip_size_kb": 0, 00:21:18.122 "state": "online", 00:21:18.122 "raid_level": "raid1", 00:21:18.122 "superblock": false, 00:21:18.122 "num_base_bdevs": 2, 00:21:18.122 "num_base_bdevs_discovered": 2, 00:21:18.122 "num_base_bdevs_operational": 2, 00:21:18.122 "process": { 00:21:18.122 "type": "rebuild", 00:21:18.122 "target": "spare", 00:21:18.122 "progress": { 00:21:18.122 "blocks": 59392, 00:21:18.122 "percent": 90 00:21:18.122 } 00:21:18.122 }, 00:21:18.122 "base_bdevs_list": [ 00:21:18.122 { 00:21:18.122 "name": "spare", 00:21:18.122 "uuid": "ebb47ef9-2e24-5d57-b17c-dd119694c9e0", 00:21:18.122 "is_configured": true, 00:21:18.122 "data_offset": 0, 00:21:18.122 "data_size": 65536 00:21:18.122 }, 00:21:18.122 { 00:21:18.122 "name": "BaseBdev2", 00:21:18.122 "uuid": "c187e3c0-5d19-4c37-af9c-52e6bb054b73", 00:21:18.122 "is_configured": true, 00:21:18.122 "data_offset": 0, 00:21:18.122 "data_size": 65536 00:21:18.122 } 00:21:18.122 ] 00:21:18.122 }' 00:21:18.122 17:00:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:18.122 17:00:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:21:18.122 17:00:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:18.122 17:00:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:18.122 17:00:06 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:18.381 [2024-11-05 17:00:07.033660] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:18.381 [2024-11-05 17:00:07.139373] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:18.381 [2024-11-05 17:00:07.141050] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:19.316 17:00:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:19.317 17:00:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:19.317 17:00:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:19.317 17:00:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:19.317 17:00:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:19.317 17:00:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:19.317 17:00:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.317 17:00:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.317 17:00:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:19.317 "name": "raid_bdev1", 00:21:19.317 "uuid": "0c232a94-f39b-4de0-b138-32c58fc0e12a", 00:21:19.317 "strip_size_kb": 0, 00:21:19.317 "state": "online", 00:21:19.317 "raid_level": "raid1", 00:21:19.317 "superblock": false, 00:21:19.317 "num_base_bdevs": 2, 00:21:19.317 "num_base_bdevs_discovered": 2, 00:21:19.317 "num_base_bdevs_operational": 2, 00:21:19.317 "base_bdevs_list": [ 00:21:19.317 { 00:21:19.317 "name": "spare", 00:21:19.317 "uuid": "ebb47ef9-2e24-5d57-b17c-dd119694c9e0", 00:21:19.317 "is_configured": true, 00:21:19.317 "data_offset": 0, 00:21:19.317 "data_size": 65536 00:21:19.317 }, 00:21:19.317 { 00:21:19.317 "name": "BaseBdev2", 00:21:19.317 "uuid": "c187e3c0-5d19-4c37-af9c-52e6bb054b73", 00:21:19.317 "is_configured": true, 00:21:19.317 "data_offset": 0, 00:21:19.317 "data_size": 65536 00:21:19.317 } 00:21:19.317 ] 00:21:19.317 }' 00:21:19.317 17:00:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:19.317 17:00:08 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:19.317 17:00:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:19.591 17:00:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:19.591 17:00:08 -- bdev/bdev_raid.sh@660 -- # break 00:21:19.591 17:00:08 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:19.591 17:00:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:19.591 17:00:08 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:19.591 17:00:08 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:19.591 17:00:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:19.591 17:00:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.591 17:00:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.591 17:00:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:19.591 "name": "raid_bdev1", 00:21:19.591 "uuid": "0c232a94-f39b-4de0-b138-32c58fc0e12a", 00:21:19.591 "strip_size_kb": 0, 00:21:19.591 "state": "online", 
00:21:19.591 "raid_level": "raid1", 00:21:19.591 "superblock": false, 00:21:19.591 "num_base_bdevs": 2, 00:21:19.591 "num_base_bdevs_discovered": 2, 00:21:19.591 "num_base_bdevs_operational": 2, 00:21:19.591 "base_bdevs_list": [ 00:21:19.591 { 00:21:19.591 "name": "spare", 00:21:19.591 "uuid": "ebb47ef9-2e24-5d57-b17c-dd119694c9e0", 00:21:19.591 "is_configured": true, 00:21:19.591 "data_offset": 0, 00:21:19.591 "data_size": 65536 00:21:19.591 }, 00:21:19.591 { 00:21:19.591 "name": "BaseBdev2", 00:21:19.591 "uuid": "c187e3c0-5d19-4c37-af9c-52e6bb054b73", 00:21:19.591 "is_configured": true, 00:21:19.591 "data_offset": 0, 00:21:19.591 "data_size": 65536 00:21:19.591 } 00:21:19.591 ] 00:21:19.591 }' 00:21:19.591 17:00:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:19.864 17:00:08 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:19.864 17:00:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:19.864 17:00:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:19.864 17:00:08 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:19.864 17:00:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:19.864 17:00:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:19.864 17:00:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:19.864 17:00:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:19.864 17:00:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:19.864 17:00:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:19.864 17:00:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:19.864 17:00:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:19.864 17:00:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:19.864 17:00:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.864 17:00:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.122 17:00:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:20.123 "name": "raid_bdev1", 00:21:20.123 "uuid": "0c232a94-f39b-4de0-b138-32c58fc0e12a", 00:21:20.123 "strip_size_kb": 0, 00:21:20.123 "state": "online", 00:21:20.123 "raid_level": "raid1", 00:21:20.123 "superblock": false, 00:21:20.123 "num_base_bdevs": 2, 00:21:20.123 "num_base_bdevs_discovered": 2, 00:21:20.123 "num_base_bdevs_operational": 2, 00:21:20.123 "base_bdevs_list": [ 00:21:20.123 { 00:21:20.123 "name": "spare", 00:21:20.123 "uuid": "ebb47ef9-2e24-5d57-b17c-dd119694c9e0", 00:21:20.123 "is_configured": true, 00:21:20.123 "data_offset": 0, 00:21:20.123 "data_size": 65536 00:21:20.123 }, 00:21:20.123 { 00:21:20.123 "name": "BaseBdev2", 00:21:20.123 "uuid": "c187e3c0-5d19-4c37-af9c-52e6bb054b73", 00:21:20.123 "is_configured": true, 00:21:20.123 "data_offset": 0, 00:21:20.123 "data_size": 65536 00:21:20.123 } 00:21:20.123 ] 00:21:20.123 }' 00:21:20.123 17:00:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:20.123 17:00:08 -- common/autotest_common.sh@10 -- # set +x 00:21:20.691 17:00:09 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:20.953 [2024-11-05 17:00:09.605219] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:20.953 [2024-11-05 17:00:09.605448] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:20.953 
00:21:20.953 Latency(us) 00:21:20.953 [2024-11-05T17:00:09.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.953 [2024-11-05T17:00:09.830Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:20.953 raid_bdev1 : 11.86 109.38 328.15 0.00 0.00 12468.26 271.83 113913.48 00:21:20.953 [2024-11-05T17:00:09.830Z] =================================================================================================================== 00:21:20.953 [2024-11-05T17:00:09.830Z] Total : 109.38 328.15 0.00 0.00 12468.26 271.83 113913.48 00:21:20.953 [2024-11-05 17:00:09.632523] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:20.953 [2024-11-05 17:00:09.632694] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:20.953 [2024-11-05 17:00:09.632802] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:20.953 0 00:21:20.953 [2024-11-05 17:00:09.633063] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:21:20.953 17:00:09 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.953 17:00:09 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:21.212 17:00:09 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:21.212 17:00:09 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:21.212 17:00:09 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:21.212 17:00:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:21.212 17:00:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:21.212 17:00:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:21.212 17:00:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:21.212 17:00:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:21.213 17:00:09 -- bdev/nbd_common.sh@12 -- # local i 00:21:21.213 17:00:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:21.213 17:00:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:21.213 17:00:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:21.471 /dev/nbd0 00:21:21.471 17:00:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:21.471 17:00:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:21.471 17:00:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:21.471 17:00:10 -- common/autotest_common.sh@867 -- # local i 00:21:21.471 17:00:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:21.471 17:00:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:21.471 17:00:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:21.471 17:00:10 -- common/autotest_common.sh@871 -- # break 00:21:21.471 17:00:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:21.471 17:00:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:21.471 17:00:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:21.471 1+0 records in 00:21:21.471 1+0 records out 00:21:21.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607396 s, 6.7 MB/s 00:21:21.471 17:00:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:21.471 17:00:10 -- common/autotest_common.sh@884 -- # 
size=4096 00:21:21.471 17:00:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:21.471 17:00:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:21.471 17:00:10 -- common/autotest_common.sh@887 -- # return 0 00:21:21.471 17:00:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:21.471 17:00:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:21.471 17:00:10 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:21.471 17:00:10 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:21:21.471 17:00:10 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:21:21.471 17:00:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:21.471 17:00:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:21:21.471 17:00:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:21.471 17:00:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:21.471 17:00:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:21.471 17:00:10 -- bdev/nbd_common.sh@12 -- # local i 00:21:21.471 17:00:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:21.471 17:00:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:21.471 17:00:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:21:21.731 /dev/nbd1 00:21:21.731 17:00:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:21.731 17:00:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:21.731 17:00:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:21.731 17:00:10 -- common/autotest_common.sh@867 -- # local i 00:21:21.731 17:00:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:21.731 17:00:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:21.731 17:00:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:21.731 17:00:10 -- common/autotest_common.sh@871 -- # break 00:21:21.731 17:00:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:21.731 17:00:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:21.731 17:00:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:21.731 1+0 records in 00:21:21.731 1+0 records out 00:21:21.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515365 s, 7.9 MB/s 00:21:21.731 17:00:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:21.731 17:00:10 -- common/autotest_common.sh@884 -- # size=4096 00:21:21.731 17:00:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:21.731 17:00:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:21.731 17:00:10 -- common/autotest_common.sh@887 -- # return 0 00:21:21.731 17:00:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:21.731 17:00:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:21.731 17:00:10 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:21.731 17:00:10 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:21.731 17:00:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:21.731 17:00:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:21.731 17:00:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:21.731 17:00:10 -- bdev/nbd_common.sh@51 -- # local i 00:21:21.731 17:00:10 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:21:21.731 17:00:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:21.989 17:00:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:21.989 17:00:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:21.989 17:00:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:21.989 17:00:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:21.989 17:00:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:21.989 17:00:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:22.248 17:00:10 -- bdev/nbd_common.sh@41 -- # break 00:21:22.248 17:00:10 -- bdev/nbd_common.sh@45 -- # return 0 00:21:22.248 17:00:10 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:22.248 17:00:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:22.248 17:00:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:22.248 17:00:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:22.248 17:00:10 -- bdev/nbd_common.sh@51 -- # local i 00:21:22.248 17:00:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:22.248 17:00:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:22.248 17:00:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:22.248 17:00:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:22.248 17:00:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:22.248 17:00:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:22.248 17:00:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:22.248 17:00:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:22.248 17:00:11 -- bdev/nbd_common.sh@41 -- # break 00:21:22.248 17:00:11 -- bdev/nbd_common.sh@45 -- # return 0 00:21:22.248 17:00:11 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:22.248 17:00:11 -- bdev/bdev_raid.sh@709 -- # killprocess 123606 00:21:22.248 17:00:11 -- common/autotest_common.sh@936 -- # '[' -z 123606 ']' 00:21:22.248 17:00:11 -- common/autotest_common.sh@940 -- # kill -0 123606 00:21:22.248 17:00:11 -- common/autotest_common.sh@941 -- # uname 00:21:22.248 17:00:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:22.248 17:00:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123606 00:21:22.248 17:00:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:22.248 17:00:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:22.248 17:00:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123606' 00:21:22.248 killing process with pid 123606 00:21:22.248 17:00:11 -- common/autotest_common.sh@955 -- # kill 123606 00:21:22.248 Received shutdown signal, test time was about 13.359683 seconds 00:21:22.248 00:21:22.248 Latency(us) 00:21:22.248 [2024-11-05T17:00:11.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.248 [2024-11-05T17:00:11.125Z] =================================================================================================================== 00:21:22.248 [2024-11-05T17:00:11.125Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:22.248 17:00:11 -- common/autotest_common.sh@960 -- # wait 123606 00:21:22.248 [2024-11-05 17:00:11.119381] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:22.507 [2024-11-05 17:00:11.269915] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 
00:21:23.478 ************************************ 00:21:23.478 END TEST raid_rebuild_test_io 00:21:23.478 ************************************ 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:23.478 00:21:23.478 real 0m18.643s 00:21:23.478 user 0m28.584s 00:21:23.478 sys 0m1.976s 00:21:23.478 17:00:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:23.478 17:00:12 -- common/autotest_common.sh@10 -- # set +x 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:21:23.478 17:00:12 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:21:23.478 17:00:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:23.478 17:00:12 -- common/autotest_common.sh@10 -- # set +x 00:21:23.478 ************************************ 00:21:23.478 START TEST raid_rebuild_test_sb_io 00:21:23.478 ************************************ 00:21:23.478 17:00:12 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 true true 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@544 -- # raid_pid=124099 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@545 -- # waitforlisten 124099 /var/tmp/spdk-raid.sock 00:21:23.478 17:00:12 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:23.478 17:00:12 -- common/autotest_common.sh@829 -- # '[' -z 124099 ']' 00:21:23.478 17:00:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:23.478 17:00:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.478 17:00:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:21:23.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:23.478 17:00:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.478 17:00:12 -- common/autotest_common.sh@10 -- # set +x 00:21:23.478 [2024-11-05 17:00:12.364490] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:23.478 [2024-11-05 17:00:12.365456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124099 ] 00:21:23.478 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:23.478 Zero copy mechanism will not be used. 00:21:23.737 [2024-11-05 17:00:12.529378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.996 [2024-11-05 17:00:12.683689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.996 [2024-11-05 17:00:12.848209] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:24.563 17:00:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:24.563 17:00:13 -- common/autotest_common.sh@862 -- # return 0 00:21:24.563 17:00:13 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:24.563 17:00:13 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:24.563 17:00:13 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:24.822 BaseBdev1_malloc 00:21:24.822 17:00:13 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:24.822 [2024-11-05 17:00:13.682212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:24.822 [2024-11-05 17:00:13.682442] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.822 [2024-11-05 17:00:13.682582] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:21:24.822 [2024-11-05 17:00:13.682722] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.822 [2024-11-05 17:00:13.685057] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.822 [2024-11-05 17:00:13.685240] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:24.822 BaseBdev1 00:21:24.822 17:00:13 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:24.822 17:00:13 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:24.822 17:00:13 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:25.389 BaseBdev2_malloc 00:21:25.389 17:00:13 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:25.389 [2024-11-05 17:00:14.165615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:25.389 [2024-11-05 17:00:14.165836] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.389 [2024-11-05 17:00:14.165922] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:25.389 [2024-11-05 17:00:14.166180] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:21:25.389 [2024-11-05 17:00:14.168507] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.389 [2024-11-05 17:00:14.168677] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:25.389 BaseBdev2 00:21:25.389 17:00:14 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:25.648 spare_malloc 00:21:25.648 17:00:14 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:25.906 spare_delay 00:21:25.906 17:00:14 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:26.166 [2024-11-05 17:00:14.815920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:26.166 [2024-11-05 17:00:14.816137] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:26.166 [2024-11-05 17:00:14.816214] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:21:26.166 [2024-11-05 17:00:14.816401] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:26.166 [2024-11-05 17:00:14.818747] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:26.166 [2024-11-05 17:00:14.818992] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:26.166 spare 00:21:26.166 17:00:14 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:26.166 [2024-11-05 17:00:14.999985] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:26.166 [2024-11-05 17:00:15.001873] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:26.166 [2024-11-05 17:00:15.002185] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:21:26.166 [2024-11-05 17:00:15.002305] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:26.166 [2024-11-05 17:00:15.002459] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:21:26.166 [2024-11-05 17:00:15.002965] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:21:26.166 [2024-11-05 17:00:15.003098] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:21:26.166 [2024-11-05 17:00:15.003339] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.166 17:00:15 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:26.166 17:00:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:26.166 17:00:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:26.166 17:00:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:26.166 17:00:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:26.166 17:00:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:26.166 17:00:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:26.166 17:00:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:26.166 17:00:15 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:21:26.166 17:00:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:26.166 17:00:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.166 17:00:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.425 17:00:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:26.425 "name": "raid_bdev1", 00:21:26.425 "uuid": "be2703d2-421e-424d-9fd6-fb14f029df99", 00:21:26.425 "strip_size_kb": 0, 00:21:26.425 "state": "online", 00:21:26.425 "raid_level": "raid1", 00:21:26.425 "superblock": true, 00:21:26.425 "num_base_bdevs": 2, 00:21:26.425 "num_base_bdevs_discovered": 2, 00:21:26.425 "num_base_bdevs_operational": 2, 00:21:26.425 "base_bdevs_list": [ 00:21:26.425 { 00:21:26.425 "name": "BaseBdev1", 00:21:26.425 "uuid": "1a0b66c7-8c0d-59ed-a8c4-0f56229f99dc", 00:21:26.425 "is_configured": true, 00:21:26.425 "data_offset": 2048, 00:21:26.425 "data_size": 63488 00:21:26.425 }, 00:21:26.425 { 00:21:26.425 "name": "BaseBdev2", 00:21:26.425 "uuid": "961403f8-c15b-5000-ac33-4516e757b4b6", 00:21:26.425 "is_configured": true, 00:21:26.425 "data_offset": 2048, 00:21:26.425 "data_size": 63488 00:21:26.425 } 00:21:26.425 ] 00:21:26.425 }' 00:21:26.425 17:00:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:26.425 17:00:15 -- common/autotest_common.sh@10 -- # set +x 00:21:26.993 17:00:15 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:26.993 17:00:15 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:27.251 [2024-11-05 17:00:16.068407] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:27.251 17:00:16 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:27.251 17:00:16 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.251 17:00:16 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:27.510 17:00:16 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:27.510 17:00:16 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:27.510 17:00:16 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:27.510 17:00:16 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:27.511 [2024-11-05 17:00:16.351589] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:27.511 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:27.511 Zero copy mechanism will not be used. 00:21:27.511 Running I/O for 60 seconds... 
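At this point the superblock variant repeats the rebuild test with background I/O: bdevperf (started with -z and a 60 s randrw workload) is told to perform_tests while the test hot-removes one raid1 leg over RPC, which is why the next entries show BaseBdev1 being removed and num_base_bdevs_discovered dropping to 1 while the array stays online. A sketch of that degrade-under-load sequence, reusing the rpc.py and bdevperf.py invocations from the log; the backgrounding, the wait, and the final jq check are illustrative assumptions, and the real script's interleaving may differ.

    # Assumes an SPDK tree as CWD and bdevperf -z already listening on the socket.
    RPC="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Kick off the queued 60 s randrw run that bdevperf is holding.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
    io_pid=$!
    # Pull one mirror leg while I/O is in flight; raid1 must survive it.
    $RPC bdev_raid_remove_base_bdev BaseBdev1
    # Expect state "online" with a single discovered base bdev.
    $RPC bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)"'
    wait "$io_pid"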
00:21:27.769 [2024-11-05 17:00:16.525227] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:27.769 [2024-11-05 17:00:16.525736] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:21:27.769 17:00:16 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:27.769 17:00:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:27.769 17:00:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:27.769 17:00:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:27.769 17:00:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:27.769 17:00:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:27.769 17:00:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:27.769 17:00:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:27.769 17:00:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:27.769 17:00:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:27.769 17:00:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.769 17:00:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.028 17:00:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:28.028 "name": "raid_bdev1", 00:21:28.028 "uuid": "be2703d2-421e-424d-9fd6-fb14f029df99", 00:21:28.028 "strip_size_kb": 0, 00:21:28.028 "state": "online", 00:21:28.028 "raid_level": "raid1", 00:21:28.028 "superblock": true, 00:21:28.028 "num_base_bdevs": 2, 00:21:28.028 "num_base_bdevs_discovered": 1, 00:21:28.028 "num_base_bdevs_operational": 1, 00:21:28.028 "base_bdevs_list": [ 00:21:28.028 { 00:21:28.028 "name": null, 00:21:28.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.028 "is_configured": false, 00:21:28.028 "data_offset": 2048, 00:21:28.028 "data_size": 63488 00:21:28.028 }, 00:21:28.028 { 00:21:28.028 "name": "BaseBdev2", 00:21:28.028 "uuid": "961403f8-c15b-5000-ac33-4516e757b4b6", 00:21:28.028 "is_configured": true, 00:21:28.028 "data_offset": 2048, 00:21:28.028 "data_size": 63488 00:21:28.028 } 00:21:28.028 ] 00:21:28.028 }' 00:21:28.028 17:00:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:28.028 17:00:16 -- common/autotest_common.sh@10 -- # set +x 00:21:28.595 17:00:17 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:28.922 [2024-11-05 17:00:17.577172] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:28.922 [2024-11-05 17:00:17.577486] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:28.922 17:00:17 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:28.922 [2024-11-05 17:00:17.628768] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:28.922 [2024-11-05 17:00:17.630676] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:28.922 [2024-11-05 17:00:17.744703] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:28.922 [2024-11-05 17:00:17.745135] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:29.205 [2024-11-05 17:00:17.978859] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:21:29.205 [2024-11-05 17:00:17.979310] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:29.773 [2024-11-05 17:00:18.444681] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:29.773 17:00:18 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:29.773 17:00:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:29.773 17:00:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:29.773 17:00:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:29.773 17:00:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:29.773 17:00:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.773 17:00:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.031 [2024-11-05 17:00:18.673853] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:30.031 [2024-11-05 17:00:18.794838] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:30.031 17:00:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:30.031 "name": "raid_bdev1", 00:21:30.031 "uuid": "be2703d2-421e-424d-9fd6-fb14f029df99", 00:21:30.031 "strip_size_kb": 0, 00:21:30.031 "state": "online", 00:21:30.031 "raid_level": "raid1", 00:21:30.031 "superblock": true, 00:21:30.031 "num_base_bdevs": 2, 00:21:30.031 "num_base_bdevs_discovered": 2, 00:21:30.031 "num_base_bdevs_operational": 2, 00:21:30.031 "process": { 00:21:30.031 "type": "rebuild", 00:21:30.031 "target": "spare", 00:21:30.031 "progress": { 00:21:30.031 "blocks": 16384, 00:21:30.031 "percent": 25 00:21:30.031 } 00:21:30.031 }, 00:21:30.031 "base_bdevs_list": [ 00:21:30.031 { 00:21:30.031 "name": "spare", 00:21:30.031 "uuid": "d11eab70-f586-5b49-a3e3-93ae98575c62", 00:21:30.031 "is_configured": true, 00:21:30.031 "data_offset": 2048, 00:21:30.031 "data_size": 63488 00:21:30.031 }, 00:21:30.031 { 00:21:30.031 "name": "BaseBdev2", 00:21:30.031 "uuid": "961403f8-c15b-5000-ac33-4516e757b4b6", 00:21:30.031 "is_configured": true, 00:21:30.032 "data_offset": 2048, 00:21:30.032 "data_size": 63488 00:21:30.032 } 00:21:30.032 ] 00:21:30.032 }' 00:21:30.032 17:00:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:30.032 17:00:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:30.032 17:00:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:30.290 17:00:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:30.290 17:00:18 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:30.548 [2024-11-05 17:00:19.202428] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:30.548 [2024-11-05 17:00:19.211597] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:30.548 [2024-11-05 17:00:19.312793] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:30.548 [2024-11-05 17:00:19.320391] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.548 [2024-11-05 17:00:19.351385] 
bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:21:30.548 17:00:19 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:30.548 17:00:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:30.548 17:00:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:30.548 17:00:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:30.548 17:00:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:30.548 17:00:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:30.548 17:00:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:30.548 17:00:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:30.548 17:00:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:30.548 17:00:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:30.548 17:00:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.548 17:00:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.806 17:00:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:30.806 "name": "raid_bdev1", 00:21:30.806 "uuid": "be2703d2-421e-424d-9fd6-fb14f029df99", 00:21:30.806 "strip_size_kb": 0, 00:21:30.806 "state": "online", 00:21:30.806 "raid_level": "raid1", 00:21:30.806 "superblock": true, 00:21:30.806 "num_base_bdevs": 2, 00:21:30.806 "num_base_bdevs_discovered": 1, 00:21:30.806 "num_base_bdevs_operational": 1, 00:21:30.806 "base_bdevs_list": [ 00:21:30.806 { 00:21:30.806 "name": null, 00:21:30.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.806 "is_configured": false, 00:21:30.806 "data_offset": 2048, 00:21:30.806 "data_size": 63488 00:21:30.806 }, 00:21:30.807 { 00:21:30.807 "name": "BaseBdev2", 00:21:30.807 "uuid": "961403f8-c15b-5000-ac33-4516e757b4b6", 00:21:30.807 "is_configured": true, 00:21:30.807 "data_offset": 2048, 00:21:30.807 "data_size": 63488 00:21:30.807 } 00:21:30.807 ] 00:21:30.807 }' 00:21:30.807 17:00:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:30.807 17:00:19 -- common/autotest_common.sh@10 -- # set +x 00:21:31.466 17:00:20 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:31.466 17:00:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:31.466 17:00:20 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:31.466 17:00:20 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:31.466 17:00:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:31.466 17:00:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.466 17:00:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.724 17:00:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:31.724 "name": "raid_bdev1", 00:21:31.724 "uuid": "be2703d2-421e-424d-9fd6-fb14f029df99", 00:21:31.724 "strip_size_kb": 0, 00:21:31.724 "state": "online", 00:21:31.724 "raid_level": "raid1", 00:21:31.724 "superblock": true, 00:21:31.724 "num_base_bdevs": 2, 00:21:31.724 "num_base_bdevs_discovered": 1, 00:21:31.724 "num_base_bdevs_operational": 1, 00:21:31.724 "base_bdevs_list": [ 00:21:31.724 { 00:21:31.724 "name": null, 00:21:31.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.724 "is_configured": false, 00:21:31.724 "data_offset": 2048, 00:21:31.724 "data_size": 63488 00:21:31.724 }, 00:21:31.724 { 00:21:31.724 
"name": "BaseBdev2", 00:21:31.724 "uuid": "961403f8-c15b-5000-ac33-4516e757b4b6", 00:21:31.724 "is_configured": true, 00:21:31.724 "data_offset": 2048, 00:21:31.724 "data_size": 63488 00:21:31.724 } 00:21:31.724 ] 00:21:31.724 }' 00:21:31.724 17:00:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:31.724 17:00:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:31.724 17:00:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:31.983 17:00:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:31.983 17:00:20 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:31.983 [2024-11-05 17:00:20.866806] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:31.983 [2024-11-05 17:00:20.867130] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:32.241 [2024-11-05 17:00:20.900581] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:21:32.242 [2024-11-05 17:00:20.902726] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:32.242 17:00:20 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:32.242 [2024-11-05 17:00:21.011202] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:32.242 [2024-11-05 17:00:21.011794] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:32.500 [2024-11-05 17:00:21.237377] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:32.500 [2024-11-05 17:00:21.237658] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:32.759 [2024-11-05 17:00:21.585497] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:32.759 [2024-11-05 17:00:21.586062] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:33.017 [2024-11-05 17:00:21.701469] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:33.017 [2024-11-05 17:00:21.701768] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:33.017 17:00:21 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.017 17:00:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:33.017 17:00:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:33.017 17:00:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:33.017 17:00:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:33.017 17:00:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.017 17:00:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.276 [2024-11-05 17:00:22.023407] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:33.276 17:00:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:33.276 "name": "raid_bdev1", 00:21:33.276 "uuid": "be2703d2-421e-424d-9fd6-fb14f029df99", 
00:21:33.276 "strip_size_kb": 0, 00:21:33.276 "state": "online", 00:21:33.276 "raid_level": "raid1", 00:21:33.276 "superblock": true, 00:21:33.276 "num_base_bdevs": 2, 00:21:33.276 "num_base_bdevs_discovered": 2, 00:21:33.276 "num_base_bdevs_operational": 2, 00:21:33.276 "process": { 00:21:33.276 "type": "rebuild", 00:21:33.276 "target": "spare", 00:21:33.276 "progress": { 00:21:33.276 "blocks": 14336, 00:21:33.276 "percent": 22 00:21:33.276 } 00:21:33.276 }, 00:21:33.276 "base_bdevs_list": [ 00:21:33.276 { 00:21:33.276 "name": "spare", 00:21:33.276 "uuid": "d11eab70-f586-5b49-a3e3-93ae98575c62", 00:21:33.276 "is_configured": true, 00:21:33.276 "data_offset": 2048, 00:21:33.276 "data_size": 63488 00:21:33.276 }, 00:21:33.276 { 00:21:33.276 "name": "BaseBdev2", 00:21:33.276 "uuid": "961403f8-c15b-5000-ac33-4516e757b4b6", 00:21:33.276 "is_configured": true, 00:21:33.276 "data_offset": 2048, 00:21:33.276 "data_size": 63488 00:21:33.276 } 00:21:33.276 ] 00:21:33.276 }' 00:21:33.276 17:00:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:33.535 17:00:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.535 17:00:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:33.535 17:00:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.535 17:00:22 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:33.535 17:00:22 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:33.535 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:33.535 17:00:22 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:33.535 17:00:22 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:33.535 17:00:22 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:33.535 17:00:22 -- bdev/bdev_raid.sh@657 -- # local timeout=462 00:21:33.535 17:00:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:33.535 17:00:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.535 17:00:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:33.535 17:00:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:33.535 17:00:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:33.535 17:00:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:33.535 17:00:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.535 17:00:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.535 [2024-11-05 17:00:22.359194] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:33.793 17:00:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:33.793 "name": "raid_bdev1", 00:21:33.793 "uuid": "be2703d2-421e-424d-9fd6-fb14f029df99", 00:21:33.793 "strip_size_kb": 0, 00:21:33.793 "state": "online", 00:21:33.793 "raid_level": "raid1", 00:21:33.793 "superblock": true, 00:21:33.793 "num_base_bdevs": 2, 00:21:33.793 "num_base_bdevs_discovered": 2, 00:21:33.793 "num_base_bdevs_operational": 2, 00:21:33.793 "process": { 00:21:33.793 "type": "rebuild", 00:21:33.793 "target": "spare", 00:21:33.793 "progress": { 00:21:33.793 "blocks": 20480, 00:21:33.793 "percent": 32 00:21:33.793 } 00:21:33.793 }, 00:21:33.793 "base_bdevs_list": [ 00:21:33.793 { 00:21:33.793 "name": "spare", 00:21:33.793 "uuid": "d11eab70-f586-5b49-a3e3-93ae98575c62", 00:21:33.793 "is_configured": true, 
00:21:33.793 "data_offset": 2048, 00:21:33.793 "data_size": 63488 00:21:33.793 }, 00:21:33.793 { 00:21:33.793 "name": "BaseBdev2", 00:21:33.793 "uuid": "961403f8-c15b-5000-ac33-4516e757b4b6", 00:21:33.793 "is_configured": true, 00:21:33.793 "data_offset": 2048, 00:21:33.793 "data_size": 63488 00:21:33.793 } 00:21:33.793 ] 00:21:33.793 }' 00:21:33.793 17:00:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:33.793 [2024-11-05 17:00:22.467847] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:33.793 17:00:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.793 17:00:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:33.793 17:00:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.793 17:00:22 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:34.052 [2024-11-05 17:00:22.803893] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:34.619 [2024-11-05 17:00:23.495390] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:34.877 17:00:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:34.877 17:00:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:34.877 17:00:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:34.877 17:00:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:34.877 17:00:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:34.877 17:00:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:34.877 17:00:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.877 17:00:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.877 [2024-11-05 17:00:23.717099] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:35.136 17:00:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:35.136 "name": "raid_bdev1", 00:21:35.136 "uuid": "be2703d2-421e-424d-9fd6-fb14f029df99", 00:21:35.136 "strip_size_kb": 0, 00:21:35.136 "state": "online", 00:21:35.136 "raid_level": "raid1", 00:21:35.136 "superblock": true, 00:21:35.136 "num_base_bdevs": 2, 00:21:35.136 "num_base_bdevs_discovered": 2, 00:21:35.136 "num_base_bdevs_operational": 2, 00:21:35.136 "process": { 00:21:35.136 "type": "rebuild", 00:21:35.136 "target": "spare", 00:21:35.136 "progress": { 00:21:35.136 "blocks": 40960, 00:21:35.136 "percent": 64 00:21:35.136 } 00:21:35.136 }, 00:21:35.136 "base_bdevs_list": [ 00:21:35.136 { 00:21:35.136 "name": "spare", 00:21:35.136 "uuid": "d11eab70-f586-5b49-a3e3-93ae98575c62", 00:21:35.136 "is_configured": true, 00:21:35.136 "data_offset": 2048, 00:21:35.136 "data_size": 63488 00:21:35.136 }, 00:21:35.136 { 00:21:35.136 "name": "BaseBdev2", 00:21:35.136 "uuid": "961403f8-c15b-5000-ac33-4516e757b4b6", 00:21:35.136 "is_configured": true, 00:21:35.136 "data_offset": 2048, 00:21:35.136 "data_size": 63488 00:21:35.136 } 00:21:35.136 ] 00:21:35.136 }' 00:21:35.136 17:00:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:35.136 17:00:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:35.136 17:00:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:35.136 17:00:23 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:35.136 17:00:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:36.072 [2024-11-05 17:00:24.800839] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:36.072 [2024-11-05 17:00:24.801325] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:36.072 17:00:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:36.072 17:00:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:36.072 17:00:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:36.072 17:00:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:36.072 17:00:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:36.072 17:00:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:36.072 17:00:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.072 17:00:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.330 [2024-11-05 17:00:25.026229] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:36.330 [2024-11-05 17:00:25.132097] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:36.330 17:00:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:36.330 "name": "raid_bdev1", 00:21:36.330 "uuid": "be2703d2-421e-424d-9fd6-fb14f029df99", 00:21:36.330 "strip_size_kb": 0, 00:21:36.330 "state": "online", 00:21:36.330 "raid_level": "raid1", 00:21:36.330 "superblock": true, 00:21:36.330 "num_base_bdevs": 2, 00:21:36.330 "num_base_bdevs_discovered": 2, 00:21:36.330 "num_base_bdevs_operational": 2, 00:21:36.330 "process": { 00:21:36.330 "type": "rebuild", 00:21:36.330 "target": "spare", 00:21:36.330 "progress": { 00:21:36.330 "blocks": 63488, 00:21:36.330 "percent": 100 00:21:36.330 } 00:21:36.330 }, 00:21:36.330 "base_bdevs_list": [ 00:21:36.330 { 00:21:36.330 "name": "spare", 00:21:36.330 "uuid": "d11eab70-f586-5b49-a3e3-93ae98575c62", 00:21:36.330 "is_configured": true, 00:21:36.330 "data_offset": 2048, 00:21:36.330 "data_size": 63488 00:21:36.330 }, 00:21:36.330 { 00:21:36.330 "name": "BaseBdev2", 00:21:36.330 "uuid": "961403f8-c15b-5000-ac33-4516e757b4b6", 00:21:36.330 "is_configured": true, 00:21:36.330 "data_offset": 2048, 00:21:36.330 "data_size": 63488 00:21:36.330 } 00:21:36.330 ] 00:21:36.330 }' 00:21:36.330 [2024-11-05 17:00:25.134752] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:36.330 17:00:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:36.330 17:00:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:36.330 17:00:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:36.589 17:00:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:36.589 17:00:25 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:37.524 17:00:26 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:37.524 17:00:26 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:37.524 17:00:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:37.524 17:00:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:37.524 17:00:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:37.524 17:00:26 -- bdev/bdev_raid.sh@186 -- # 
local raid_bdev_info 00:21:37.524 17:00:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.524 17:00:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.782 17:00:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:37.782 "name": "raid_bdev1", 00:21:37.782 "uuid": "be2703d2-421e-424d-9fd6-fb14f029df99", 00:21:37.782 "strip_size_kb": 0, 00:21:37.782 "state": "online", 00:21:37.782 "raid_level": "raid1", 00:21:37.782 "superblock": true, 00:21:37.782 "num_base_bdevs": 2, 00:21:37.782 "num_base_bdevs_discovered": 2, 00:21:37.782 "num_base_bdevs_operational": 2, 00:21:37.782 "base_bdevs_list": [ 00:21:37.782 { 00:21:37.782 "name": "spare", 00:21:37.782 "uuid": "d11eab70-f586-5b49-a3e3-93ae98575c62", 00:21:37.782 "is_configured": true, 00:21:37.782 "data_offset": 2048, 00:21:37.782 "data_size": 63488 00:21:37.782 }, 00:21:37.782 { 00:21:37.782 "name": "BaseBdev2", 00:21:37.782 "uuid": "961403f8-c15b-5000-ac33-4516e757b4b6", 00:21:37.782 "is_configured": true, 00:21:37.782 "data_offset": 2048, 00:21:37.782 "data_size": 63488 00:21:37.782 } 00:21:37.782 ] 00:21:37.782 }' 00:21:37.782 17:00:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:37.782 17:00:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:37.782 17:00:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:37.782 17:00:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:37.782 17:00:26 -- bdev/bdev_raid.sh@660 -- # break 00:21:37.782 17:00:26 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:37.782 17:00:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:37.782 17:00:26 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:37.782 17:00:26 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:37.782 17:00:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:37.782 17:00:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.782 17:00:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.041 17:00:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:38.041 "name": "raid_bdev1", 00:21:38.041 "uuid": "be2703d2-421e-424d-9fd6-fb14f029df99", 00:21:38.041 "strip_size_kb": 0, 00:21:38.041 "state": "online", 00:21:38.041 "raid_level": "raid1", 00:21:38.041 "superblock": true, 00:21:38.041 "num_base_bdevs": 2, 00:21:38.041 "num_base_bdevs_discovered": 2, 00:21:38.041 "num_base_bdevs_operational": 2, 00:21:38.041 "base_bdevs_list": [ 00:21:38.041 { 00:21:38.041 "name": "spare", 00:21:38.041 "uuid": "d11eab70-f586-5b49-a3e3-93ae98575c62", 00:21:38.041 "is_configured": true, 00:21:38.041 "data_offset": 2048, 00:21:38.041 "data_size": 63488 00:21:38.041 }, 00:21:38.041 { 00:21:38.041 "name": "BaseBdev2", 00:21:38.041 "uuid": "961403f8-c15b-5000-ac33-4516e757b4b6", 00:21:38.041 "is_configured": true, 00:21:38.041 "data_offset": 2048, 00:21:38.041 "data_size": 63488 00:21:38.041 } 00:21:38.041 ] 00:21:38.041 }' 00:21:38.041 17:00:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:38.041 17:00:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:38.041 17:00:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:38.041 17:00:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:38.041 17:00:26 -- bdev/bdev_raid.sh@667 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:38.041 17:00:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:38.041 17:00:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:38.041 17:00:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:38.041 17:00:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:38.041 17:00:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:38.041 17:00:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:38.041 17:00:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:38.041 17:00:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:38.041 17:00:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:38.041 17:00:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.041 17:00:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.300 17:00:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:38.300 "name": "raid_bdev1", 00:21:38.300 "uuid": "be2703d2-421e-424d-9fd6-fb14f029df99", 00:21:38.300 "strip_size_kb": 0, 00:21:38.300 "state": "online", 00:21:38.300 "raid_level": "raid1", 00:21:38.300 "superblock": true, 00:21:38.300 "num_base_bdevs": 2, 00:21:38.300 "num_base_bdevs_discovered": 2, 00:21:38.300 "num_base_bdevs_operational": 2, 00:21:38.300 "base_bdevs_list": [ 00:21:38.300 { 00:21:38.300 "name": "spare", 00:21:38.300 "uuid": "d11eab70-f586-5b49-a3e3-93ae98575c62", 00:21:38.300 "is_configured": true, 00:21:38.300 "data_offset": 2048, 00:21:38.300 "data_size": 63488 00:21:38.300 }, 00:21:38.300 { 00:21:38.300 "name": "BaseBdev2", 00:21:38.300 "uuid": "961403f8-c15b-5000-ac33-4516e757b4b6", 00:21:38.300 "is_configured": true, 00:21:38.300 "data_offset": 2048, 00:21:38.300 "data_size": 63488 00:21:38.300 } 00:21:38.300 ] 00:21:38.300 }' 00:21:38.300 17:00:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:38.300 17:00:27 -- common/autotest_common.sh@10 -- # set +x 00:21:38.867 17:00:27 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:39.125 [2024-11-05 17:00:27.965502] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:39.125 [2024-11-05 17:00:27.965774] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:39.383 00:21:39.383 Latency(us) 00:21:39.383 [2024-11-05T17:00:28.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.383 [2024-11-05T17:00:28.260Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:39.383 raid_bdev1 : 11.69 112.90 338.70 0.00 0.00 12761.59 273.69 113436.86 00:21:39.383 [2024-11-05T17:00:28.261Z] =================================================================================================================== 00:21:39.384 [2024-11-05T17:00:28.261Z] Total : 112.90 338.70 0.00 0.00 12761.59 273.69 113436.86 00:21:39.384 [2024-11-05 17:00:28.060110] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.384 [2024-11-05 17:00:28.060286] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:39.384 [2024-11-05 17:00:28.060405] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:39.384 0 00:21:39.384 [2024-11-05 17:00:28.060619] bdev_raid.c: 351:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:21:39.384 17:00:28 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.384 17:00:28 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:39.642 17:00:28 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:39.642 17:00:28 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:39.642 17:00:28 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:39.642 17:00:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:39.642 17:00:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:39.642 17:00:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:39.642 17:00:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:39.642 17:00:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:39.642 17:00:28 -- bdev/nbd_common.sh@12 -- # local i 00:21:39.642 17:00:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:39.642 17:00:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:39.642 17:00:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:39.901 /dev/nbd0 00:21:39.901 17:00:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:39.901 17:00:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:39.901 17:00:28 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:39.901 17:00:28 -- common/autotest_common.sh@867 -- # local i 00:21:39.901 17:00:28 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:39.901 17:00:28 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:39.901 17:00:28 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:39.901 17:00:28 -- common/autotest_common.sh@871 -- # break 00:21:39.901 17:00:28 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:39.901 17:00:28 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:39.901 17:00:28 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:39.901 1+0 records in 00:21:39.901 1+0 records out 00:21:39.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419248 s, 9.8 MB/s 00:21:39.901 17:00:28 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:39.901 17:00:28 -- common/autotest_common.sh@884 -- # size=4096 00:21:39.901 17:00:28 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:39.901 17:00:28 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:39.901 17:00:28 -- common/autotest_common.sh@887 -- # return 0 00:21:39.901 17:00:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:39.901 17:00:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:39.901 17:00:28 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:39.901 17:00:28 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:21:39.901 17:00:28 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:21:39.901 17:00:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:39.901 17:00:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:21:39.901 17:00:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:39.901 17:00:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:39.901 17:00:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:39.901 17:00:28 -- 
bdev/nbd_common.sh@12 -- # local i 00:21:39.901 17:00:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:39.901 17:00:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:39.901 17:00:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:21:40.159 /dev/nbd1 00:21:40.159 17:00:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:40.159 17:00:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:40.159 17:00:28 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:40.159 17:00:28 -- common/autotest_common.sh@867 -- # local i 00:21:40.159 17:00:28 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:40.159 17:00:28 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:40.159 17:00:28 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:40.159 17:00:28 -- common/autotest_common.sh@871 -- # break 00:21:40.159 17:00:28 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:40.159 17:00:28 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:40.159 17:00:28 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:40.159 1+0 records in 00:21:40.159 1+0 records out 00:21:40.159 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000534002 s, 7.7 MB/s 00:21:40.159 17:00:28 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:40.159 17:00:28 -- common/autotest_common.sh@884 -- # size=4096 00:21:40.159 17:00:28 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:40.159 17:00:28 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:40.159 17:00:28 -- common/autotest_common.sh@887 -- # return 0 00:21:40.159 17:00:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:40.159 17:00:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:40.159 17:00:28 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:40.418 17:00:29 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:40.418 17:00:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:40.418 17:00:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:40.418 17:00:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:40.418 17:00:29 -- bdev/nbd_common.sh@51 -- # local i 00:21:40.418 17:00:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:40.418 17:00:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:40.677 17:00:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:40.677 17:00:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:40.677 17:00:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:40.677 17:00:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:40.677 17:00:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:40.677 17:00:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:40.677 17:00:29 -- bdev/nbd_common.sh@41 -- # break 00:21:40.677 17:00:29 -- bdev/nbd_common.sh@45 -- # return 0 00:21:40.677 17:00:29 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:40.677 17:00:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:40.677 17:00:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:40.677 17:00:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:40.677 
17:00:29 -- bdev/nbd_common.sh@51 -- # local i 00:21:40.677 17:00:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:40.677 17:00:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:40.939 17:00:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:40.939 17:00:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:40.939 17:00:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:40.939 17:00:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:40.939 17:00:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:40.939 17:00:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:40.939 17:00:29 -- bdev/nbd_common.sh@41 -- # break 00:21:40.939 17:00:29 -- bdev/nbd_common.sh@45 -- # return 0 00:21:40.939 17:00:29 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:40.939 17:00:29 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:40.939 17:00:29 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:40.939 17:00:29 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:41.199 17:00:29 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:41.457 [2024-11-05 17:00:30.177070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:41.457 [2024-11-05 17:00:30.177424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.457 [2024-11-05 17:00:30.177580] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:41.457 [2024-11-05 17:00:30.177703] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.457 [2024-11-05 17:00:30.180086] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.457 [2024-11-05 17:00:30.180295] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:41.457 [2024-11-05 17:00:30.180541] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:41.457 [2024-11-05 17:00:30.180719] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:41.457 BaseBdev1 00:21:41.457 17:00:30 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:41.457 17:00:30 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:21:41.457 17:00:30 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:21:41.716 17:00:30 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:41.974 [2024-11-05 17:00:30.665175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:41.974 [2024-11-05 17:00:30.665398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.974 [2024-11-05 17:00:30.665573] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:41.974 [2024-11-05 17:00:30.665706] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.974 [2024-11-05 17:00:30.666250] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.974 [2024-11-05 17:00:30.666444] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:41.974 [2024-11-05 17:00:30.666665] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:21:41.974 [2024-11-05 17:00:30.666768] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:21:41.974 [2024-11-05 17:00:30.666857] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:41.974 [2024-11-05 17:00:30.666991] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:21:41.974 [2024-11-05 17:00:30.667150] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:41.974 BaseBdev2 00:21:41.974 17:00:30 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:42.233 17:00:30 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:42.491 [2024-11-05 17:00:31.157365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:42.491 [2024-11-05 17:00:31.157580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.491 [2024-11-05 17:00:31.157726] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:42.491 [2024-11-05 17:00:31.157853] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.491 [2024-11-05 17:00:31.158358] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.491 [2024-11-05 17:00:31.158531] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:42.491 [2024-11-05 17:00:31.158771] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:42.491 [2024-11-05 17:00:31.158942] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:42.491 spare 00:21:42.491 17:00:31 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:42.491 17:00:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:42.491 17:00:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:42.491 17:00:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:42.491 17:00:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:42.491 17:00:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:42.491 17:00:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:42.491 17:00:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:42.491 17:00:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:42.491 17:00:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:42.491 17:00:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.491 17:00:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.491 [2024-11-05 17:00:31.259169] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:21:42.491 [2024-11-05 17:00:31.259314] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:42.491 [2024-11-05 17:00:31.259462] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d00002af30 00:21:42.491 [2024-11-05 17:00:31.260032] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:21:42.491 [2024-11-05 17:00:31.260156] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:21:42.491 [2024-11-05 17:00:31.260386] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.750 17:00:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:42.750 "name": "raid_bdev1", 00:21:42.750 "uuid": "be2703d2-421e-424d-9fd6-fb14f029df99", 00:21:42.750 "strip_size_kb": 0, 00:21:42.750 "state": "online", 00:21:42.750 "raid_level": "raid1", 00:21:42.750 "superblock": true, 00:21:42.750 "num_base_bdevs": 2, 00:21:42.750 "num_base_bdevs_discovered": 2, 00:21:42.750 "num_base_bdevs_operational": 2, 00:21:42.750 "base_bdevs_list": [ 00:21:42.750 { 00:21:42.750 "name": "spare", 00:21:42.750 "uuid": "d11eab70-f586-5b49-a3e3-93ae98575c62", 00:21:42.750 "is_configured": true, 00:21:42.750 "data_offset": 2048, 00:21:42.750 "data_size": 63488 00:21:42.750 }, 00:21:42.750 { 00:21:42.750 "name": "BaseBdev2", 00:21:42.750 "uuid": "961403f8-c15b-5000-ac33-4516e757b4b6", 00:21:42.750 "is_configured": true, 00:21:42.750 "data_offset": 2048, 00:21:42.750 "data_size": 63488 00:21:42.750 } 00:21:42.750 ] 00:21:42.750 }' 00:21:42.750 17:00:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:42.750 17:00:31 -- common/autotest_common.sh@10 -- # set +x 00:21:43.316 17:00:31 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:43.316 17:00:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:43.316 17:00:31 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:43.316 17:00:31 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:43.316 17:00:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:43.316 17:00:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.316 17:00:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.575 17:00:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:43.575 "name": "raid_bdev1", 00:21:43.575 "uuid": "be2703d2-421e-424d-9fd6-fb14f029df99", 00:21:43.575 "strip_size_kb": 0, 00:21:43.575 "state": "online", 00:21:43.575 "raid_level": "raid1", 00:21:43.575 "superblock": true, 00:21:43.575 "num_base_bdevs": 2, 00:21:43.575 "num_base_bdevs_discovered": 2, 00:21:43.575 "num_base_bdevs_operational": 2, 00:21:43.575 "base_bdevs_list": [ 00:21:43.575 { 00:21:43.575 "name": "spare", 00:21:43.575 "uuid": "d11eab70-f586-5b49-a3e3-93ae98575c62", 00:21:43.575 "is_configured": true, 00:21:43.575 "data_offset": 2048, 00:21:43.575 "data_size": 63488 00:21:43.575 }, 00:21:43.575 { 00:21:43.575 "name": "BaseBdev2", 00:21:43.575 "uuid": "961403f8-c15b-5000-ac33-4516e757b4b6", 00:21:43.575 "is_configured": true, 00:21:43.575 "data_offset": 2048, 00:21:43.575 "data_size": 63488 00:21:43.575 } 00:21:43.575 ] 00:21:43.575 }' 00:21:43.575 17:00:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:43.575 17:00:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:43.575 17:00:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:43.575 17:00:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:43.575 17:00:32 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.575 
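The check that follows is a single RPC plus a jq filter; a minimal standalone sketch, assuming rpc.py is invoked from the SPDK repo root (socket path and bdev names as captured in this run):

    # Confirm slot 0 of raid_bdev1 is now backed by the rebuilt "spare" bdev.
    sock=/var/tmp/spdk-raid.sock
    name=$(scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .base_bdevs_list[0].name')
    [[ "$name" == spare ]] || exit 1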
17:00:32 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:43.834 17:00:32 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:43.834 17:00:32 -- bdev/bdev_raid.sh@709 -- # killprocess 124099 00:21:43.834 17:00:32 -- common/autotest_common.sh@936 -- # '[' -z 124099 ']' 00:21:43.834 17:00:32 -- common/autotest_common.sh@940 -- # kill -0 124099 00:21:43.834 17:00:32 -- common/autotest_common.sh@941 -- # uname 00:21:43.834 17:00:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:43.834 17:00:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124099 00:21:43.834 killing process with pid 124099 00:21:43.834 Received shutdown signal, test time was about 16.213034 seconds 00:21:43.834 00:21:43.834 Latency(us) 00:21:43.834 [2024-11-05T17:00:32.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.834 [2024-11-05T17:00:32.711Z] =================================================================================================================== 00:21:43.834 [2024-11-05T17:00:32.711Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:43.834 17:00:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:43.834 17:00:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:43.834 17:00:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124099' 00:21:43.834 17:00:32 -- common/autotest_common.sh@955 -- # kill 124099 00:21:43.834 [2024-11-05 17:00:32.566754] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:43.834 17:00:32 -- common/autotest_common.sh@960 -- # wait 124099 00:21:43.834 [2024-11-05 17:00:32.566826] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:43.834 [2024-11-05 17:00:32.566936] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:43.834 [2024-11-05 17:00:32.566954] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:21:43.834 [2024-11-05 17:00:32.716362] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:45.210 ************************************ 00:21:45.210 END TEST raid_rebuild_test_sb_io 00:21:45.210 ************************************ 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:45.210 00:21:45.210 real 0m21.390s 00:21:45.210 user 0m33.901s 00:21:45.210 sys 0m2.345s 00:21:45.210 17:00:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:45.210 17:00:33 -- common/autotest_common.sh@10 -- # set +x 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:21:45.210 17:00:33 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:21:45.210 17:00:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:45.210 17:00:33 -- common/autotest_common.sh@10 -- # set +x 00:21:45.210 ************************************ 00:21:45.210 START TEST raid_rebuild_test 00:21:45.210 ************************************ 00:21:45.210 17:00:33 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 false false 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@520 -- # local 
background_io=false 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@544 -- # raid_pid=124668 00:21:45.210 17:00:33 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:45.211 17:00:33 -- bdev/bdev_raid.sh@545 -- # waitforlisten 124668 /var/tmp/spdk-raid.sock 00:21:45.211 17:00:33 -- common/autotest_common.sh@829 -- # '[' -z 124668 ']' 00:21:45.211 17:00:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:45.211 17:00:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:45.211 17:00:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:45.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:45.211 17:00:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:45.211 17:00:33 -- common/autotest_common.sh@10 -- # set +x 00:21:45.211 [2024-11-05 17:00:33.818497] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:45.211 [2024-11-05 17:00:33.818883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124668 ] 00:21:45.211 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:45.211 Zero copy mechanism will not be used. 
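What the trace above just started, condensed into a hedged sketch (binary path, flags, and socket copied from this run; waitforlisten is the test suite's own common helper):

    # Run bdevperf as the RPC target for the rebuild test, then block until
    # its UNIX-domain RPC socket is accepting connections.
    build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock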
00:21:45.211 [2024-11-05 17:00:33.980641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.469 [2024-11-05 17:00:34.145915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.469 [2024-11-05 17:00:34.311031] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:46.037 17:00:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:46.037 17:00:34 -- common/autotest_common.sh@862 -- # return 0 00:21:46.037 17:00:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:46.037 17:00:34 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:46.037 17:00:34 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:46.295 BaseBdev1 00:21:46.295 17:00:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:46.295 17:00:34 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:46.295 17:00:34 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:46.554 BaseBdev2 00:21:46.554 17:00:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:46.554 17:00:35 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:46.554 17:00:35 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:46.812 BaseBdev3 00:21:46.812 17:00:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:46.812 17:00:35 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:46.812 17:00:35 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:47.070 BaseBdev4 00:21:47.070 17:00:35 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:47.329 spare_malloc 00:21:47.329 17:00:36 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:47.329 spare_delay 00:21:47.329 17:00:36 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:47.587 [2024-11-05 17:00:36.366922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:47.587 [2024-11-05 17:00:36.367181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.587 [2024-11-05 17:00:36.367253] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:21:47.587 [2024-11-05 17:00:36.367491] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.587 [2024-11-05 17:00:36.369736] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.587 [2024-11-05 17:00:36.369901] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:47.587 spare 00:21:47.587 17:00:36 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:47.846 [2024-11-05 17:00:36.562986] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:47.846 [2024-11-05 17:00:36.564985] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:47.846 [2024-11-05 17:00:36.565163] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:47.846 [2024-11-05 17:00:36.565309] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:47.846 [2024-11-05 17:00:36.565511] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:21:47.846 [2024-11-05 17:00:36.565625] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:47.846 [2024-11-05 17:00:36.565836] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:21:47.846 [2024-11-05 17:00:36.566266] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:21:47.846 [2024-11-05 17:00:36.566386] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:21:47.846 [2024-11-05 17:00:36.566636] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:47.846 17:00:36 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:47.846 17:00:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:47.846 17:00:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:47.846 17:00:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:47.846 17:00:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:47.846 17:00:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:47.846 17:00:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:47.846 17:00:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:47.846 17:00:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:47.846 17:00:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:47.846 17:00:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.846 17:00:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.104 17:00:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:48.104 "name": "raid_bdev1", 00:21:48.104 "uuid": "14bedb7a-028f-41b6-916d-86bb5c30a208", 00:21:48.104 "strip_size_kb": 0, 00:21:48.104 "state": "online", 00:21:48.104 "raid_level": "raid1", 00:21:48.104 "superblock": false, 00:21:48.104 "num_base_bdevs": 4, 00:21:48.104 "num_base_bdevs_discovered": 4, 00:21:48.104 "num_base_bdevs_operational": 4, 00:21:48.104 "base_bdevs_list": [ 00:21:48.104 { 00:21:48.104 "name": "BaseBdev1", 00:21:48.104 "uuid": "f2e4744a-bd04-455c-8a5e-af6c9e0bb0a2", 00:21:48.104 "is_configured": true, 00:21:48.104 "data_offset": 0, 00:21:48.104 "data_size": 65536 00:21:48.104 }, 00:21:48.104 { 00:21:48.104 "name": "BaseBdev2", 00:21:48.104 "uuid": "afd7633f-d283-41c1-868e-87ee5a4d2faa", 00:21:48.104 "is_configured": true, 00:21:48.104 "data_offset": 0, 00:21:48.104 "data_size": 65536 00:21:48.104 }, 00:21:48.104 { 00:21:48.104 "name": "BaseBdev3", 00:21:48.104 "uuid": "8aaf1405-e9f2-48b4-8162-a24854fe49e1", 00:21:48.104 "is_configured": true, 00:21:48.104 "data_offset": 0, 00:21:48.104 "data_size": 65536 00:21:48.104 }, 00:21:48.104 { 00:21:48.104 "name": "BaseBdev4", 00:21:48.105 "uuid": "8aefb69e-c042-46a2-8fb7-d3f8524f3205", 00:21:48.105 "is_configured": true, 00:21:48.105 "data_offset": 0, 00:21:48.105 "data_size": 65536 00:21:48.105 } 00:21:48.105 ] 00:21:48.105 }' 00:21:48.105 
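The setup verified above can be reproduced with a handful of rpc.py calls; a minimal sketch using the exact parameters from this trace (32 MiB malloc bdevs, 512-byte blocks, raid1, no superblock):

    # Assemble the array under test: four malloc base bdevs into raid1.
    sock=/var/tmp/spdk-raid.sock
    for i in 1 2 3 4; do
        scripts/rpc.py -s "$sock" bdev_malloc_create 32 512 -b "BaseBdev$i"
    done
    scripts/rpc.py -s "$sock" bdev_raid_create -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1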
17:00:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:48.105 17:00:36 -- common/autotest_common.sh@10 -- # set +x 00:21:48.720 17:00:37 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:48.720 17:00:37 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:49.033 [2024-11-05 17:00:37.655444] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:49.033 17:00:37 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:49.033 17:00:37 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.033 17:00:37 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:49.033 17:00:37 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:49.033 17:00:37 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:49.033 17:00:37 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:49.033 17:00:37 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:49.033 17:00:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:49.033 17:00:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:49.033 17:00:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:49.033 17:00:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:49.033 17:00:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:49.033 17:00:37 -- bdev/nbd_common.sh@12 -- # local i 00:21:49.033 17:00:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:49.033 17:00:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:49.033 17:00:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:49.292 [2024-11-05 17:00:38.051359] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:49.292 /dev/nbd0 00:21:49.292 17:00:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:49.292 17:00:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:49.292 17:00:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:49.292 17:00:38 -- common/autotest_common.sh@867 -- # local i 00:21:49.292 17:00:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:49.292 17:00:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:49.292 17:00:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:49.292 17:00:38 -- common/autotest_common.sh@871 -- # break 00:21:49.292 17:00:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:49.292 17:00:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:49.292 17:00:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:49.292 1+0 records in 00:21:49.292 1+0 records out 00:21:49.292 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000903516 s, 4.5 MB/s 00:21:49.292 17:00:38 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:49.292 17:00:38 -- common/autotest_common.sh@884 -- # size=4096 00:21:49.292 17:00:38 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:49.292 17:00:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:49.292 17:00:38 -- common/autotest_common.sh@887 -- # return 0 00:21:49.292 17:00:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:49.292 17:00:38 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:49.292 17:00:38 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:49.292 17:00:38 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:49.292 17:00:38 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:21:54.558 65536+0 records in 00:21:54.558 65536+0 records out 00:21:54.558 33554432 bytes (34 MB, 32 MiB) copied, 4.87069 s, 6.9 MB/s 00:21:54.558 17:00:43 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:54.558 17:00:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:54.558 17:00:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:54.558 17:00:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:54.558 17:00:43 -- bdev/nbd_common.sh@51 -- # local i 00:21:54.558 17:00:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:54.558 17:00:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:54.558 17:00:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:54.558 17:00:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:54.558 17:00:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:54.558 17:00:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:54.558 17:00:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:54.558 17:00:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:54.558 [2024-11-05 17:00:43.278026] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.558 17:00:43 -- bdev/nbd_common.sh@41 -- # break 00:21:54.558 17:00:43 -- bdev/nbd_common.sh@45 -- # return 0 00:21:54.558 17:00:43 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:54.817 [2024-11-05 17:00:43.461664] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:54.817 17:00:43 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:54.817 17:00:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:54.817 17:00:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:54.817 17:00:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:54.817 17:00:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:54.817 17:00:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:54.817 17:00:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:54.817 17:00:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:54.817 17:00:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:54.817 17:00:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:54.817 17:00:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.817 17:00:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.075 17:00:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:55.075 "name": "raid_bdev1", 00:21:55.075 "uuid": "14bedb7a-028f-41b6-916d-86bb5c30a208", 00:21:55.075 "strip_size_kb": 0, 00:21:55.075 "state": "online", 00:21:55.075 "raid_level": "raid1", 00:21:55.075 "superblock": false, 00:21:55.075 "num_base_bdevs": 4, 00:21:55.075 "num_base_bdevs_discovered": 3, 00:21:55.075 "num_base_bdevs_operational": 3, 00:21:55.075 "base_bdevs_list": [ 00:21:55.075 { 00:21:55.075 "name": null, 00:21:55.075 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:55.075 "is_configured": false, 00:21:55.075 "data_offset": 0, 00:21:55.075 "data_size": 65536 00:21:55.075 }, 00:21:55.075 { 00:21:55.075 "name": "BaseBdev2", 00:21:55.075 "uuid": "afd7633f-d283-41c1-868e-87ee5a4d2faa", 00:21:55.075 "is_configured": true, 00:21:55.075 "data_offset": 0, 00:21:55.075 "data_size": 65536 00:21:55.075 }, 00:21:55.075 { 00:21:55.075 "name": "BaseBdev3", 00:21:55.075 "uuid": "8aaf1405-e9f2-48b4-8162-a24854fe49e1", 00:21:55.075 "is_configured": true, 00:21:55.075 "data_offset": 0, 00:21:55.075 "data_size": 65536 00:21:55.075 }, 00:21:55.075 { 00:21:55.075 "name": "BaseBdev4", 00:21:55.075 "uuid": "8aefb69e-c042-46a2-8fb7-d3f8524f3205", 00:21:55.075 "is_configured": true, 00:21:55.075 "data_offset": 0, 00:21:55.075 "data_size": 65536 00:21:55.075 } 00:21:55.075 ] 00:21:55.075 }' 00:21:55.075 17:00:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:55.075 17:00:43 -- common/autotest_common.sh@10 -- # set +x 00:21:55.641 17:00:44 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:55.899 [2024-11-05 17:00:44.565875] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:55.899 [2024-11-05 17:00:44.566064] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:55.899 [2024-11-05 17:00:44.576619] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0 00:21:55.899 [2024-11-05 17:00:44.578642] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:55.899 17:00:44 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:56.832 17:00:45 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:56.832 17:00:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:56.832 17:00:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:56.832 17:00:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:56.832 17:00:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:56.832 17:00:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.832 17:00:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.089 17:00:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:57.089 "name": "raid_bdev1", 00:21:57.089 "uuid": "14bedb7a-028f-41b6-916d-86bb5c30a208", 00:21:57.089 "strip_size_kb": 0, 00:21:57.089 "state": "online", 00:21:57.089 "raid_level": "raid1", 00:21:57.089 "superblock": false, 00:21:57.089 "num_base_bdevs": 4, 00:21:57.089 "num_base_bdevs_discovered": 4, 00:21:57.089 "num_base_bdevs_operational": 4, 00:21:57.089 "process": { 00:21:57.089 "type": "rebuild", 00:21:57.089 "target": "spare", 00:21:57.089 "progress": { 00:21:57.089 "blocks": 24576, 00:21:57.089 "percent": 37 00:21:57.089 } 00:21:57.089 }, 00:21:57.089 "base_bdevs_list": [ 00:21:57.089 { 00:21:57.090 "name": "spare", 00:21:57.090 "uuid": "488c3f1e-ecdc-5378-a359-d59da3f5f50c", 00:21:57.090 "is_configured": true, 00:21:57.090 "data_offset": 0, 00:21:57.090 "data_size": 65536 00:21:57.090 }, 00:21:57.090 { 00:21:57.090 "name": "BaseBdev2", 00:21:57.090 "uuid": "afd7633f-d283-41c1-868e-87ee5a4d2faa", 00:21:57.090 "is_configured": true, 00:21:57.090 "data_offset": 0, 00:21:57.090 "data_size": 65536 00:21:57.090 }, 00:21:57.090 { 00:21:57.090 "name": "BaseBdev3", 00:21:57.090 
"uuid": "8aaf1405-e9f2-48b4-8162-a24854fe49e1", 00:21:57.090 "is_configured": true, 00:21:57.090 "data_offset": 0, 00:21:57.090 "data_size": 65536 00:21:57.090 }, 00:21:57.090 { 00:21:57.090 "name": "BaseBdev4", 00:21:57.090 "uuid": "8aefb69e-c042-46a2-8fb7-d3f8524f3205", 00:21:57.090 "is_configured": true, 00:21:57.090 "data_offset": 0, 00:21:57.090 "data_size": 65536 00:21:57.090 } 00:21:57.090 ] 00:21:57.090 }' 00:21:57.090 17:00:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:57.090 17:00:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:57.090 17:00:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:57.090 17:00:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.090 17:00:45 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:57.348 [2024-11-05 17:00:46.200928] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:57.606 [2024-11-05 17:00:46.287741] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:57.606 [2024-11-05 17:00:46.288000] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.606 17:00:46 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:57.606 17:00:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:57.606 17:00:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:57.606 17:00:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:57.606 17:00:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:57.606 17:00:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:57.606 17:00:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:57.606 17:00:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:57.606 17:00:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:57.606 17:00:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:57.606 17:00:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.606 17:00:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.864 17:00:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:57.864 "name": "raid_bdev1", 00:21:57.864 "uuid": "14bedb7a-028f-41b6-916d-86bb5c30a208", 00:21:57.864 "strip_size_kb": 0, 00:21:57.864 "state": "online", 00:21:57.864 "raid_level": "raid1", 00:21:57.864 "superblock": false, 00:21:57.864 "num_base_bdevs": 4, 00:21:57.864 "num_base_bdevs_discovered": 3, 00:21:57.864 "num_base_bdevs_operational": 3, 00:21:57.864 "base_bdevs_list": [ 00:21:57.864 { 00:21:57.864 "name": null, 00:21:57.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.864 "is_configured": false, 00:21:57.864 "data_offset": 0, 00:21:57.864 "data_size": 65536 00:21:57.864 }, 00:21:57.864 { 00:21:57.864 "name": "BaseBdev2", 00:21:57.864 "uuid": "afd7633f-d283-41c1-868e-87ee5a4d2faa", 00:21:57.864 "is_configured": true, 00:21:57.864 "data_offset": 0, 00:21:57.864 "data_size": 65536 00:21:57.864 }, 00:21:57.864 { 00:21:57.864 "name": "BaseBdev3", 00:21:57.864 "uuid": "8aaf1405-e9f2-48b4-8162-a24854fe49e1", 00:21:57.864 "is_configured": true, 00:21:57.864 "data_offset": 0, 00:21:57.864 "data_size": 65536 00:21:57.864 }, 00:21:57.864 { 00:21:57.864 "name": "BaseBdev4", 00:21:57.864 "uuid": "8aefb69e-c042-46a2-8fb7-d3f8524f3205", 
00:21:57.864 "is_configured": true, 00:21:57.864 "data_offset": 0, 00:21:57.864 "data_size": 65536 00:21:57.864 } 00:21:57.864 ] 00:21:57.864 }' 00:21:57.864 17:00:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:57.864 17:00:46 -- common/autotest_common.sh@10 -- # set +x 00:21:58.429 17:00:47 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:58.429 17:00:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:58.429 17:00:47 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:58.429 17:00:47 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:58.429 17:00:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:58.429 17:00:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.429 17:00:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.687 17:00:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:58.687 "name": "raid_bdev1", 00:21:58.687 "uuid": "14bedb7a-028f-41b6-916d-86bb5c30a208", 00:21:58.687 "strip_size_kb": 0, 00:21:58.687 "state": "online", 00:21:58.687 "raid_level": "raid1", 00:21:58.687 "superblock": false, 00:21:58.687 "num_base_bdevs": 4, 00:21:58.687 "num_base_bdevs_discovered": 3, 00:21:58.687 "num_base_bdevs_operational": 3, 00:21:58.687 "base_bdevs_list": [ 00:21:58.687 { 00:21:58.687 "name": null, 00:21:58.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.687 "is_configured": false, 00:21:58.687 "data_offset": 0, 00:21:58.687 "data_size": 65536 00:21:58.687 }, 00:21:58.687 { 00:21:58.687 "name": "BaseBdev2", 00:21:58.687 "uuid": "afd7633f-d283-41c1-868e-87ee5a4d2faa", 00:21:58.687 "is_configured": true, 00:21:58.687 "data_offset": 0, 00:21:58.687 "data_size": 65536 00:21:58.687 }, 00:21:58.687 { 00:21:58.687 "name": "BaseBdev3", 00:21:58.687 "uuid": "8aaf1405-e9f2-48b4-8162-a24854fe49e1", 00:21:58.687 "is_configured": true, 00:21:58.687 "data_offset": 0, 00:21:58.687 "data_size": 65536 00:21:58.687 }, 00:21:58.687 { 00:21:58.687 "name": "BaseBdev4", 00:21:58.687 "uuid": "8aefb69e-c042-46a2-8fb7-d3f8524f3205", 00:21:58.687 "is_configured": true, 00:21:58.688 "data_offset": 0, 00:21:58.688 "data_size": 65536 00:21:58.688 } 00:21:58.688 ] 00:21:58.688 }' 00:21:58.688 17:00:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:58.688 17:00:47 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:58.688 17:00:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:58.688 17:00:47 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:58.688 17:00:47 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:58.945 [2024-11-05 17:00:47.680372] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:58.945 [2024-11-05 17:00:47.680553] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:58.945 [2024-11-05 17:00:47.691842] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09890 00:21:58.945 [2024-11-05 17:00:47.693842] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:58.945 17:00:47 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:59.882 17:00:48 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:59.882 17:00:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
00:21:59.882 17:00:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:59.882 17:00:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:59.882 17:00:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:59.882 17:00:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.882 17:00:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.140 17:00:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:00.140 "name": "raid_bdev1", 00:22:00.140 "uuid": "14bedb7a-028f-41b6-916d-86bb5c30a208", 00:22:00.140 "strip_size_kb": 0, 00:22:00.140 "state": "online", 00:22:00.140 "raid_level": "raid1", 00:22:00.140 "superblock": false, 00:22:00.140 "num_base_bdevs": 4, 00:22:00.140 "num_base_bdevs_discovered": 4, 00:22:00.140 "num_base_bdevs_operational": 4, 00:22:00.140 "process": { 00:22:00.141 "type": "rebuild", 00:22:00.141 "target": "spare", 00:22:00.141 "progress": { 00:22:00.141 "blocks": 24576, 00:22:00.141 "percent": 37 00:22:00.141 } 00:22:00.141 }, 00:22:00.141 "base_bdevs_list": [ 00:22:00.141 { 00:22:00.141 "name": "spare", 00:22:00.141 "uuid": "488c3f1e-ecdc-5378-a359-d59da3f5f50c", 00:22:00.141 "is_configured": true, 00:22:00.141 "data_offset": 0, 00:22:00.141 "data_size": 65536 00:22:00.141 }, 00:22:00.141 { 00:22:00.141 "name": "BaseBdev2", 00:22:00.141 "uuid": "afd7633f-d283-41c1-868e-87ee5a4d2faa", 00:22:00.141 "is_configured": true, 00:22:00.141 "data_offset": 0, 00:22:00.141 "data_size": 65536 00:22:00.141 }, 00:22:00.141 { 00:22:00.141 "name": "BaseBdev3", 00:22:00.141 "uuid": "8aaf1405-e9f2-48b4-8162-a24854fe49e1", 00:22:00.141 "is_configured": true, 00:22:00.141 "data_offset": 0, 00:22:00.141 "data_size": 65536 00:22:00.141 }, 00:22:00.141 { 00:22:00.141 "name": "BaseBdev4", 00:22:00.141 "uuid": "8aefb69e-c042-46a2-8fb7-d3f8524f3205", 00:22:00.141 "is_configured": true, 00:22:00.141 "data_offset": 0, 00:22:00.141 "data_size": 65536 00:22:00.141 } 00:22:00.141 ] 00:22:00.141 }' 00:22:00.141 17:00:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:00.141 17:00:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:00.141 17:00:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:00.141 17:00:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:00.141 17:00:49 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:00.141 17:00:49 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:00.141 17:00:49 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:00.141 17:00:49 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:00.141 17:00:49 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:00.399 [2024-11-05 17:00:49.264024] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:00.657 [2024-11-05 17:00:49.302441] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09890 00:22:00.657 17:00:49 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:00.657 17:00:49 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:00.657 17:00:49 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:00.657 17:00:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:00.657 17:00:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:00.657 17:00:49 -- 
bdev/bdev_raid.sh@185 -- # local target=spare 00:22:00.657 17:00:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:00.657 17:00:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.657 17:00:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.657 17:00:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:00.657 "name": "raid_bdev1", 00:22:00.657 "uuid": "14bedb7a-028f-41b6-916d-86bb5c30a208", 00:22:00.657 "strip_size_kb": 0, 00:22:00.657 "state": "online", 00:22:00.657 "raid_level": "raid1", 00:22:00.657 "superblock": false, 00:22:00.657 "num_base_bdevs": 4, 00:22:00.657 "num_base_bdevs_discovered": 3, 00:22:00.657 "num_base_bdevs_operational": 3, 00:22:00.657 "process": { 00:22:00.657 "type": "rebuild", 00:22:00.657 "target": "spare", 00:22:00.657 "progress": { 00:22:00.657 "blocks": 36864, 00:22:00.657 "percent": 56 00:22:00.657 } 00:22:00.657 }, 00:22:00.657 "base_bdevs_list": [ 00:22:00.657 { 00:22:00.657 "name": "spare", 00:22:00.657 "uuid": "488c3f1e-ecdc-5378-a359-d59da3f5f50c", 00:22:00.657 "is_configured": true, 00:22:00.657 "data_offset": 0, 00:22:00.657 "data_size": 65536 00:22:00.657 }, 00:22:00.657 { 00:22:00.657 "name": null, 00:22:00.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.657 "is_configured": false, 00:22:00.657 "data_offset": 0, 00:22:00.657 "data_size": 65536 00:22:00.657 }, 00:22:00.657 { 00:22:00.657 "name": "BaseBdev3", 00:22:00.657 "uuid": "8aaf1405-e9f2-48b4-8162-a24854fe49e1", 00:22:00.657 "is_configured": true, 00:22:00.657 "data_offset": 0, 00:22:00.657 "data_size": 65536 00:22:00.657 }, 00:22:00.657 { 00:22:00.657 "name": "BaseBdev4", 00:22:00.657 "uuid": "8aefb69e-c042-46a2-8fb7-d3f8524f3205", 00:22:00.657 "is_configured": true, 00:22:00.657 "data_offset": 0, 00:22:00.657 "data_size": 65536 00:22:00.657 } 00:22:00.657 ] 00:22:00.657 }' 00:22:00.657 17:00:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:00.915 17:00:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:00.915 17:00:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:00.915 17:00:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:00.916 17:00:49 -- bdev/bdev_raid.sh@657 -- # local timeout=489 00:22:00.916 17:00:49 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:00.916 17:00:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:00.916 17:00:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:00.916 17:00:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:00.916 17:00:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:00.916 17:00:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:00.916 17:00:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.916 17:00:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.173 17:00:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:01.173 "name": "raid_bdev1", 00:22:01.173 "uuid": "14bedb7a-028f-41b6-916d-86bb5c30a208", 00:22:01.173 "strip_size_kb": 0, 00:22:01.173 "state": "online", 00:22:01.173 "raid_level": "raid1", 00:22:01.173 "superblock": false, 00:22:01.173 "num_base_bdevs": 4, 00:22:01.173 "num_base_bdevs_discovered": 3, 00:22:01.173 "num_base_bdevs_operational": 3, 00:22:01.173 "process": { 00:22:01.173 "type": 
"rebuild", 00:22:01.173 "target": "spare", 00:22:01.173 "progress": { 00:22:01.173 "blocks": 43008, 00:22:01.173 "percent": 65 00:22:01.173 } 00:22:01.173 }, 00:22:01.173 "base_bdevs_list": [ 00:22:01.173 { 00:22:01.173 "name": "spare", 00:22:01.173 "uuid": "488c3f1e-ecdc-5378-a359-d59da3f5f50c", 00:22:01.173 "is_configured": true, 00:22:01.173 "data_offset": 0, 00:22:01.173 "data_size": 65536 00:22:01.173 }, 00:22:01.173 { 00:22:01.173 "name": null, 00:22:01.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.173 "is_configured": false, 00:22:01.173 "data_offset": 0, 00:22:01.173 "data_size": 65536 00:22:01.173 }, 00:22:01.173 { 00:22:01.173 "name": "BaseBdev3", 00:22:01.173 "uuid": "8aaf1405-e9f2-48b4-8162-a24854fe49e1", 00:22:01.173 "is_configured": true, 00:22:01.173 "data_offset": 0, 00:22:01.173 "data_size": 65536 00:22:01.173 }, 00:22:01.173 { 00:22:01.173 "name": "BaseBdev4", 00:22:01.173 "uuid": "8aefb69e-c042-46a2-8fb7-d3f8524f3205", 00:22:01.173 "is_configured": true, 00:22:01.173 "data_offset": 0, 00:22:01.173 "data_size": 65536 00:22:01.173 } 00:22:01.173 ] 00:22:01.173 }' 00:22:01.173 17:00:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:01.173 17:00:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:01.174 17:00:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:01.174 17:00:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:01.174 17:00:49 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:02.107 [2024-11-05 17:00:50.910830] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:02.107 [2024-11-05 17:00:50.911124] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:02.107 [2024-11-05 17:00:50.911347] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.107 17:00:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:02.107 17:00:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:02.107 17:00:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:02.107 17:00:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:02.107 17:00:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:02.107 17:00:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:02.107 17:00:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.107 17:00:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.365 17:00:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:02.365 "name": "raid_bdev1", 00:22:02.365 "uuid": "14bedb7a-028f-41b6-916d-86bb5c30a208", 00:22:02.365 "strip_size_kb": 0, 00:22:02.365 "state": "online", 00:22:02.365 "raid_level": "raid1", 00:22:02.365 "superblock": false, 00:22:02.365 "num_base_bdevs": 4, 00:22:02.365 "num_base_bdevs_discovered": 3, 00:22:02.365 "num_base_bdevs_operational": 3, 00:22:02.365 "base_bdevs_list": [ 00:22:02.365 { 00:22:02.365 "name": "spare", 00:22:02.365 "uuid": "488c3f1e-ecdc-5378-a359-d59da3f5f50c", 00:22:02.365 "is_configured": true, 00:22:02.365 "data_offset": 0, 00:22:02.365 "data_size": 65536 00:22:02.365 }, 00:22:02.365 { 00:22:02.365 "name": null, 00:22:02.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.365 "is_configured": false, 00:22:02.365 "data_offset": 0, 00:22:02.365 "data_size": 65536 00:22:02.365 }, 00:22:02.365 { 00:22:02.365 "name": 
"BaseBdev3", 00:22:02.365 "uuid": "8aaf1405-e9f2-48b4-8162-a24854fe49e1", 00:22:02.365 "is_configured": true, 00:22:02.365 "data_offset": 0, 00:22:02.365 "data_size": 65536 00:22:02.365 }, 00:22:02.365 { 00:22:02.365 "name": "BaseBdev4", 00:22:02.365 "uuid": "8aefb69e-c042-46a2-8fb7-d3f8524f3205", 00:22:02.365 "is_configured": true, 00:22:02.365 "data_offset": 0, 00:22:02.365 "data_size": 65536 00:22:02.365 } 00:22:02.365 ] 00:22:02.365 }' 00:22:02.365 17:00:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:02.365 17:00:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:02.365 17:00:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:02.623 17:00:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:02.623 17:00:51 -- bdev/bdev_raid.sh@660 -- # break 00:22:02.623 17:00:51 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:02.623 17:00:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:02.623 17:00:51 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:02.623 17:00:51 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:02.623 17:00:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:02.623 17:00:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.623 17:00:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.881 17:00:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:02.881 "name": "raid_bdev1", 00:22:02.881 "uuid": "14bedb7a-028f-41b6-916d-86bb5c30a208", 00:22:02.881 "strip_size_kb": 0, 00:22:02.881 "state": "online", 00:22:02.881 "raid_level": "raid1", 00:22:02.881 "superblock": false, 00:22:02.881 "num_base_bdevs": 4, 00:22:02.881 "num_base_bdevs_discovered": 3, 00:22:02.881 "num_base_bdevs_operational": 3, 00:22:02.881 "base_bdevs_list": [ 00:22:02.881 { 00:22:02.881 "name": "spare", 00:22:02.881 "uuid": "488c3f1e-ecdc-5378-a359-d59da3f5f50c", 00:22:02.881 "is_configured": true, 00:22:02.881 "data_offset": 0, 00:22:02.881 "data_size": 65536 00:22:02.881 }, 00:22:02.881 { 00:22:02.881 "name": null, 00:22:02.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.881 "is_configured": false, 00:22:02.881 "data_offset": 0, 00:22:02.881 "data_size": 65536 00:22:02.881 }, 00:22:02.881 { 00:22:02.881 "name": "BaseBdev3", 00:22:02.881 "uuid": "8aaf1405-e9f2-48b4-8162-a24854fe49e1", 00:22:02.881 "is_configured": true, 00:22:02.881 "data_offset": 0, 00:22:02.881 "data_size": 65536 00:22:02.881 }, 00:22:02.881 { 00:22:02.881 "name": "BaseBdev4", 00:22:02.881 "uuid": "8aefb69e-c042-46a2-8fb7-d3f8524f3205", 00:22:02.881 "is_configured": true, 00:22:02.881 "data_offset": 0, 00:22:02.881 "data_size": 65536 00:22:02.881 } 00:22:02.881 ] 00:22:02.881 }' 00:22:02.881 17:00:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:02.881 17:00:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:02.881 17:00:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:02.881 17:00:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:02.881 17:00:51 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:02.881 17:00:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:02.881 17:00:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:02.881 17:00:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:02.881 17:00:51 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:02.881 17:00:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:02.881 17:00:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:02.881 17:00:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:02.881 17:00:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:02.881 17:00:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:02.881 17:00:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.881 17:00:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.139 17:00:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:03.139 "name": "raid_bdev1", 00:22:03.139 "uuid": "14bedb7a-028f-41b6-916d-86bb5c30a208", 00:22:03.139 "strip_size_kb": 0, 00:22:03.139 "state": "online", 00:22:03.139 "raid_level": "raid1", 00:22:03.139 "superblock": false, 00:22:03.139 "num_base_bdevs": 4, 00:22:03.139 "num_base_bdevs_discovered": 3, 00:22:03.139 "num_base_bdevs_operational": 3, 00:22:03.139 "base_bdevs_list": [ 00:22:03.139 { 00:22:03.139 "name": "spare", 00:22:03.139 "uuid": "488c3f1e-ecdc-5378-a359-d59da3f5f50c", 00:22:03.139 "is_configured": true, 00:22:03.139 "data_offset": 0, 00:22:03.139 "data_size": 65536 00:22:03.139 }, 00:22:03.139 { 00:22:03.139 "name": null, 00:22:03.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.139 "is_configured": false, 00:22:03.139 "data_offset": 0, 00:22:03.139 "data_size": 65536 00:22:03.139 }, 00:22:03.139 { 00:22:03.139 "name": "BaseBdev3", 00:22:03.139 "uuid": "8aaf1405-e9f2-48b4-8162-a24854fe49e1", 00:22:03.139 "is_configured": true, 00:22:03.139 "data_offset": 0, 00:22:03.139 "data_size": 65536 00:22:03.139 }, 00:22:03.139 { 00:22:03.139 "name": "BaseBdev4", 00:22:03.139 "uuid": "8aefb69e-c042-46a2-8fb7-d3f8524f3205", 00:22:03.139 "is_configured": true, 00:22:03.139 "data_offset": 0, 00:22:03.139 "data_size": 65536 00:22:03.139 } 00:22:03.139 ] 00:22:03.139 }' 00:22:03.139 17:00:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:03.139 17:00:51 -- common/autotest_common.sh@10 -- # set +x 00:22:03.705 17:00:52 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:03.962 [2024-11-05 17:00:52.756264] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:03.962 [2024-11-05 17:00:52.756415] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:03.962 [2024-11-05 17:00:52.756594] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:03.962 [2024-11-05 17:00:52.756803] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:03.963 [2024-11-05 17:00:52.756902] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:22:03.963 17:00:52 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:03.963 17:00:52 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.220 17:00:53 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:04.220 17:00:53 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:04.220 17:00:53 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:04.220 17:00:53 -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:22:04.220 17:00:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:04.220 17:00:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:04.220 17:00:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:04.220 17:00:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:04.220 17:00:53 -- bdev/nbd_common.sh@12 -- # local i 00:22:04.220 17:00:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:04.220 17:00:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:04.220 17:00:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:04.478 /dev/nbd0 00:22:04.478 17:00:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:04.478 17:00:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:04.478 17:00:53 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:04.478 17:00:53 -- common/autotest_common.sh@867 -- # local i 00:22:04.478 17:00:53 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:04.478 17:00:53 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:04.478 17:00:53 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:04.478 17:00:53 -- common/autotest_common.sh@871 -- # break 00:22:04.478 17:00:53 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:04.478 17:00:53 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:04.478 17:00:53 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:04.478 1+0 records in 00:22:04.478 1+0 records out 00:22:04.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509267 s, 8.0 MB/s 00:22:04.478 17:00:53 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.478 17:00:53 -- common/autotest_common.sh@884 -- # size=4096 00:22:04.478 17:00:53 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.478 17:00:53 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:04.478 17:00:53 -- common/autotest_common.sh@887 -- # return 0 00:22:04.478 17:00:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:04.478 17:00:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:04.478 17:00:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:04.736 /dev/nbd1 00:22:04.736 17:00:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:04.736 17:00:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:04.736 17:00:53 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:04.736 17:00:53 -- common/autotest_common.sh@867 -- # local i 00:22:04.736 17:00:53 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:04.736 17:00:53 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:04.736 17:00:53 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:04.736 17:00:53 -- common/autotest_common.sh@871 -- # break 00:22:04.736 17:00:53 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:04.736 17:00:53 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:04.736 17:00:53 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:04.736 1+0 records in 00:22:04.736 1+0 records out 00:22:04.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561973 s, 7.3 MB/s 00:22:04.736 17:00:53 -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.736 17:00:53 -- common/autotest_common.sh@884 -- # size=4096 00:22:04.736 17:00:53 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.736 17:00:53 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:04.736 17:00:53 -- common/autotest_common.sh@887 -- # return 0 00:22:04.736 17:00:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:04.736 17:00:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:04.736 17:00:53 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:04.994 17:00:53 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:04.994 17:00:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:04.994 17:00:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:04.994 17:00:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:04.994 17:00:53 -- bdev/nbd_common.sh@51 -- # local i 00:22:04.994 17:00:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:04.994 17:00:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:05.252 17:00:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:05.252 17:00:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:05.252 17:00:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:05.252 17:00:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:05.252 17:00:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:05.252 17:00:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:05.252 17:00:53 -- bdev/nbd_common.sh@41 -- # break 00:22:05.252 17:00:53 -- bdev/nbd_common.sh@45 -- # return 0 00:22:05.252 17:00:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:05.252 17:00:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:05.510 17:00:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:05.510 17:00:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:05.510 17:00:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:05.510 17:00:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:05.510 17:00:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:05.510 17:00:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:05.510 17:00:54 -- bdev/nbd_common.sh@41 -- # break 00:22:05.510 17:00:54 -- bdev/nbd_common.sh@45 -- # return 0 00:22:05.510 17:00:54 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:05.510 17:00:54 -- bdev/bdev_raid.sh@709 -- # killprocess 124668 00:22:05.510 17:00:54 -- common/autotest_common.sh@936 -- # '[' -z 124668 ']' 00:22:05.510 17:00:54 -- common/autotest_common.sh@940 -- # kill -0 124668 00:22:05.510 17:00:54 -- common/autotest_common.sh@941 -- # uname 00:22:05.510 17:00:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:05.510 17:00:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124668 00:22:05.510 17:00:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:05.510 17:00:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:05.510 17:00:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124668' 00:22:05.510 killing process with pid 124668 00:22:05.510 17:00:54 -- common/autotest_common.sh@955 -- # kill 124668 00:22:05.510 Received shutdown 
signal, test time was about 60.000000 seconds 00:22:05.510 00:22:05.510 Latency(us) 00:22:05.510 [2024-11-05T17:00:54.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.510 [2024-11-05T17:00:54.387Z] =================================================================================================================== 00:22:05.510 [2024-11-05T17:00:54.387Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:05.510 17:00:54 -- common/autotest_common.sh@960 -- # wait 124668 00:22:05.511 [2024-11-05 17:00:54.193406] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:05.769 [2024-11-05 17:00:54.509949] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:06.704 ************************************ 00:22:06.704 END TEST raid_rebuild_test 00:22:06.704 ************************************ 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:06.704 00:22:06.704 real 0m21.692s 00:22:06.704 user 0m30.294s 00:22:06.704 sys 0m3.354s 00:22:06.704 17:00:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:06.704 17:00:55 -- common/autotest_common.sh@10 -- # set +x 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:22:06.704 17:00:55 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:22:06.704 17:00:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:06.704 17:00:55 -- common/autotest_common.sh@10 -- # set +x 00:22:06.704 ************************************ 00:22:06.704 START TEST raid_rebuild_test_sb 00:22:06.704 ************************************ 00:22:06.704 17:00:55 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 true false 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@526 -- # 
local data_offset 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@544 -- # raid_pid=125209 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@545 -- # waitforlisten 125209 /var/tmp/spdk-raid.sock 00:22:06.704 17:00:55 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:06.704 17:00:55 -- common/autotest_common.sh@829 -- # '[' -z 125209 ']' 00:22:06.704 17:00:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:06.704 17:00:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:06.704 17:00:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:06.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:06.704 17:00:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:06.704 17:00:55 -- common/autotest_common.sh@10 -- # set +x 00:22:06.704 [2024-11-05 17:00:55.564952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:06.704 [2024-11-05 17:00:55.565293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125209 ] 00:22:06.704 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:06.704 Zero copy mechanism will not be used. 
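For orientation: everything below is driven through the bdevperf RPC socket at /var/tmp/spdk-raid.sock. A minimal sketch of the setup sequence the trace performs, using only the rpc.py calls and sizes that appear in this run (the for-loop is a condensation for brevity; the harness issues each call separately):

    #!/usr/bin/env bash
    set -e
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # One malloc bdev per member (32 MiB, 512-byte blocks), each wrapped in
    # a passthru bdev so a member can later be torn down and re-added to
    # exercise the rebuild path.
    for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        $rpc -s $sock bdev_malloc_create 32 512 -b "${b}_malloc"
        $rpc -s $sock bdev_passthru_create -b "${b}_malloc" -p "$b"
    done
    # -s requests an on-disk superblock, which is the point of the _sb variant.
    $rpc -s $sock bdev_raid_create -s -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
    # The state checks below poll this JSON for num_base_bdevs_discovered etc.
    $rpc -s $sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")'

The "spare" member is built the same way, except that a bdev_delay_create stage (average write latency 100000 us, per the flags in the trace) is inserted between the malloc and passthru bdevs so that the rebuild stays observably in progress while the test inspects it.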
00:22:06.962 [2024-11-05 17:00:55.717274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.231 [2024-11-05 17:00:55.893094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.231 [2024-11-05 17:00:56.064946] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:07.811 17:00:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:07.811 17:00:56 -- common/autotest_common.sh@862 -- # return 0 00:22:07.812 17:00:56 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:07.812 17:00:56 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:07.812 17:00:56 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:08.069 BaseBdev1_malloc 00:22:08.069 17:00:56 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:08.069 [2024-11-05 17:00:56.939547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:08.069 [2024-11-05 17:00:56.939824] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.069 [2024-11-05 17:00:56.939968] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:22:08.069 [2024-11-05 17:00:56.940112] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.069 [2024-11-05 17:00:56.942583] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.069 [2024-11-05 17:00:56.942758] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:08.069 BaseBdev1 00:22:08.069 17:00:56 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:08.069 17:00:56 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:08.069 17:00:56 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:08.635 BaseBdev2_malloc 00:22:08.635 17:00:57 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:08.635 [2024-11-05 17:00:57.425244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:08.635 [2024-11-05 17:00:57.425506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.635 [2024-11-05 17:00:57.425710] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:08.635 [2024-11-05 17:00:57.425966] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.635 [2024-11-05 17:00:57.429325] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.635 [2024-11-05 17:00:57.429539] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:08.635 BaseBdev2 00:22:08.635 17:00:57 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:08.635 17:00:57 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:08.635 17:00:57 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:08.893 BaseBdev3_malloc 00:22:08.893 17:00:57 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:22:09.151 [2024-11-05 17:00:57.891413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:09.151 [2024-11-05 17:00:57.891606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:09.151 [2024-11-05 17:00:57.891683] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:22:09.151 [2024-11-05 17:00:57.891940] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:09.151 [2024-11-05 17:00:57.894335] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:09.151 [2024-11-05 17:00:57.894505] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:09.151 BaseBdev3 00:22:09.151 17:00:57 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:09.151 17:00:57 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:09.151 17:00:57 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:09.409 BaseBdev4_malloc 00:22:09.409 17:00:58 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:09.666 [2024-11-05 17:00:58.345065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:09.666 [2024-11-05 17:00:58.345265] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:09.666 [2024-11-05 17:00:58.345338] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:22:09.666 [2024-11-05 17:00:58.345488] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:09.666 [2024-11-05 17:00:58.347835] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:09.666 [2024-11-05 17:00:58.348003] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:09.666 BaseBdev4 00:22:09.666 17:00:58 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:09.923 spare_malloc 00:22:09.923 17:00:58 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:09.923 spare_delay 00:22:09.923 17:00:58 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:10.180 [2024-11-05 17:00:58.935653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:10.180 [2024-11-05 17:00:58.935854] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:10.180 [2024-11-05 17:00:58.935923] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:10.180 [2024-11-05 17:00:58.936084] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:10.180 [2024-11-05 17:00:58.938533] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:10.180 [2024-11-05 17:00:58.938740] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:10.180 spare 00:22:10.180 17:00:58 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:10.438 [2024-11-05 17:00:59.127765] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:10.438 [2024-11-05 17:00:59.129837] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:10.438 [2024-11-05 17:00:59.130040] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:10.438 [2024-11-05 17:00:59.130140] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:10.438 [2024-11-05 17:00:59.130451] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:22:10.438 [2024-11-05 17:00:59.130675] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:10.438 [2024-11-05 17:00:59.130846] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:10.438 [2024-11-05 17:00:59.131390] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:22:10.438 [2024-11-05 17:00:59.131496] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:22:10.438 [2024-11-05 17:00:59.131750] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:10.438 17:00:59 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:10.438 17:00:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:10.438 17:00:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:10.438 17:00:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:10.438 17:00:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:10.438 17:00:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:10.438 17:00:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:10.438 17:00:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:10.438 17:00:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:10.438 17:00:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:10.438 17:00:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.438 17:00:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.438 17:00:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:10.438 "name": "raid_bdev1", 00:22:10.438 "uuid": "81b098a4-ae88-44eb-8e07-30d9788f3539", 00:22:10.438 "strip_size_kb": 0, 00:22:10.438 "state": "online", 00:22:10.438 "raid_level": "raid1", 00:22:10.438 "superblock": true, 00:22:10.438 "num_base_bdevs": 4, 00:22:10.438 "num_base_bdevs_discovered": 4, 00:22:10.438 "num_base_bdevs_operational": 4, 00:22:10.438 "base_bdevs_list": [ 00:22:10.438 { 00:22:10.438 "name": "BaseBdev1", 00:22:10.438 "uuid": "8e408e88-68fa-5e72-ae34-3177f693fcb8", 00:22:10.438 "is_configured": true, 00:22:10.438 "data_offset": 2048, 00:22:10.438 "data_size": 63488 00:22:10.438 }, 00:22:10.438 { 00:22:10.438 "name": "BaseBdev2", 00:22:10.438 "uuid": "df242031-99af-5e6a-8117-0ef2f4e9b1d8", 00:22:10.438 "is_configured": true, 00:22:10.438 "data_offset": 2048, 00:22:10.438 "data_size": 63488 00:22:10.438 }, 00:22:10.438 { 00:22:10.438 "name": "BaseBdev3", 00:22:10.438 "uuid": "32939f8a-20d4-51e1-be75-fa9c3d2044f9", 00:22:10.438 "is_configured": true, 00:22:10.438 "data_offset": 2048, 00:22:10.438 "data_size": 63488 00:22:10.438 }, 00:22:10.438 
{ 00:22:10.438 "name": "BaseBdev4", 00:22:10.438 "uuid": "c2505fde-78b3-57c6-b967-b2bcb30644bf", 00:22:10.438 "is_configured": true, 00:22:10.438 "data_offset": 2048, 00:22:10.438 "data_size": 63488 00:22:10.438 } 00:22:10.438 ] 00:22:10.438 }' 00:22:10.438 17:00:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:10.438 17:00:59 -- common/autotest_common.sh@10 -- # set +x 00:22:11.370 17:00:59 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:11.370 17:00:59 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:11.370 [2024-11-05 17:01:00.220136] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:11.370 17:01:00 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:22:11.370 17:01:00 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.370 17:01:00 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:11.628 17:01:00 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:11.628 17:01:00 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:11.628 17:01:00 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:11.628 17:01:00 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:11.628 17:01:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:11.628 17:01:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:11.628 17:01:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:11.628 17:01:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:11.628 17:01:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:11.628 17:01:00 -- bdev/nbd_common.sh@12 -- # local i 00:22:11.628 17:01:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:11.628 17:01:00 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:11.628 17:01:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:11.887 [2024-11-05 17:01:00.644000] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:11.887 /dev/nbd0 00:22:11.887 17:01:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:11.887 17:01:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:11.887 17:01:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:11.887 17:01:00 -- common/autotest_common.sh@867 -- # local i 00:22:11.887 17:01:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:11.887 17:01:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:11.887 17:01:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:11.887 17:01:00 -- common/autotest_common.sh@871 -- # break 00:22:11.887 17:01:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:11.887 17:01:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:11.887 17:01:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:11.887 1+0 records in 00:22:11.887 1+0 records out 00:22:11.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390904 s, 10.5 MB/s 00:22:11.887 17:01:00 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:11.887 17:01:00 -- common/autotest_common.sh@884 -- # size=4096 00:22:11.887 17:01:00 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:11.887 17:01:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:11.887 17:01:00 -- common/autotest_common.sh@887 -- # return 0 00:22:11.887 17:01:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:11.887 17:01:00 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:11.887 17:01:00 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:22:11.887 17:01:00 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:22:11.887 17:01:00 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:22:18.442 63488+0 records in 00:22:18.442 63488+0 records out 00:22:18.442 32505856 bytes (33 MB, 31 MiB) copied, 5.78182 s, 5.6 MB/s 00:22:18.442 17:01:06 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:18.442 17:01:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:18.442 17:01:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:18.442 17:01:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:18.442 17:01:06 -- bdev/nbd_common.sh@51 -- # local i 00:22:18.442 17:01:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:18.442 17:01:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:18.442 17:01:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:18.442 17:01:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:18.442 17:01:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:18.442 17:01:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:18.442 17:01:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:18.442 17:01:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:18.442 17:01:06 -- bdev/nbd_common.sh@41 -- # break 00:22:18.442 17:01:06 -- bdev/nbd_common.sh@45 -- # return 0 00:22:18.442 17:01:06 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:18.442 [2024-11-05 17:01:06.762051] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:18.442 [2024-11-05 17:01:06.996454] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:18.442 17:01:07 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:18.442 17:01:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:18.442 17:01:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:18.442 17:01:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:18.442 17:01:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:18.442 17:01:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:18.442 17:01:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:18.442 17:01:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:18.442 17:01:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:18.442 17:01:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:18.442 17:01:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.442 17:01:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.442 17:01:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:18.442 "name": "raid_bdev1", 00:22:18.442 "uuid": "81b098a4-ae88-44eb-8e07-30d9788f3539", 00:22:18.442 "strip_size_kb": 0, 00:22:18.442 "state": "online", 00:22:18.442 
"raid_level": "raid1", 00:22:18.442 "superblock": true, 00:22:18.442 "num_base_bdevs": 4, 00:22:18.442 "num_base_bdevs_discovered": 3, 00:22:18.442 "num_base_bdevs_operational": 3, 00:22:18.442 "base_bdevs_list": [ 00:22:18.442 { 00:22:18.442 "name": null, 00:22:18.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.442 "is_configured": false, 00:22:18.442 "data_offset": 2048, 00:22:18.442 "data_size": 63488 00:22:18.442 }, 00:22:18.442 { 00:22:18.442 "name": "BaseBdev2", 00:22:18.442 "uuid": "df242031-99af-5e6a-8117-0ef2f4e9b1d8", 00:22:18.442 "is_configured": true, 00:22:18.442 "data_offset": 2048, 00:22:18.442 "data_size": 63488 00:22:18.442 }, 00:22:18.442 { 00:22:18.442 "name": "BaseBdev3", 00:22:18.442 "uuid": "32939f8a-20d4-51e1-be75-fa9c3d2044f9", 00:22:18.442 "is_configured": true, 00:22:18.442 "data_offset": 2048, 00:22:18.442 "data_size": 63488 00:22:18.442 }, 00:22:18.442 { 00:22:18.442 "name": "BaseBdev4", 00:22:18.442 "uuid": "c2505fde-78b3-57c6-b967-b2bcb30644bf", 00:22:18.442 "is_configured": true, 00:22:18.442 "data_offset": 2048, 00:22:18.442 "data_size": 63488 00:22:18.442 } 00:22:18.442 ] 00:22:18.442 }' 00:22:18.442 17:01:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:18.442 17:01:07 -- common/autotest_common.sh@10 -- # set +x 00:22:19.008 17:01:07 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:19.266 [2024-11-05 17:01:08.048654] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:19.266 [2024-11-05 17:01:08.048861] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:19.266 [2024-11-05 17:01:08.060517] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:22:19.266 [2024-11-05 17:01:08.062513] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:19.266 17:01:08 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:20.200 17:01:09 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:20.200 17:01:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:20.200 17:01:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:20.200 17:01:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:20.200 17:01:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:20.200 17:01:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.200 17:01:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.458 17:01:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:20.458 "name": "raid_bdev1", 00:22:20.458 "uuid": "81b098a4-ae88-44eb-8e07-30d9788f3539", 00:22:20.458 "strip_size_kb": 0, 00:22:20.458 "state": "online", 00:22:20.458 "raid_level": "raid1", 00:22:20.458 "superblock": true, 00:22:20.458 "num_base_bdevs": 4, 00:22:20.458 "num_base_bdevs_discovered": 4, 00:22:20.458 "num_base_bdevs_operational": 4, 00:22:20.458 "process": { 00:22:20.458 "type": "rebuild", 00:22:20.458 "target": "spare", 00:22:20.458 "progress": { 00:22:20.458 "blocks": 24576, 00:22:20.458 "percent": 38 00:22:20.458 } 00:22:20.458 }, 00:22:20.458 "base_bdevs_list": [ 00:22:20.458 { 00:22:20.458 "name": "spare", 00:22:20.458 "uuid": "3ebab75f-443b-5ae3-9ac9-26d7aa9ccd3a", 00:22:20.458 "is_configured": true, 00:22:20.458 "data_offset": 2048, 00:22:20.458 "data_size": 63488 00:22:20.458 
}, 00:22:20.458 { 00:22:20.458 "name": "BaseBdev2", 00:22:20.458 "uuid": "df242031-99af-5e6a-8117-0ef2f4e9b1d8", 00:22:20.458 "is_configured": true, 00:22:20.458 "data_offset": 2048, 00:22:20.458 "data_size": 63488 00:22:20.458 }, 00:22:20.458 { 00:22:20.458 "name": "BaseBdev3", 00:22:20.458 "uuid": "32939f8a-20d4-51e1-be75-fa9c3d2044f9", 00:22:20.458 "is_configured": true, 00:22:20.458 "data_offset": 2048, 00:22:20.458 "data_size": 63488 00:22:20.458 }, 00:22:20.458 { 00:22:20.458 "name": "BaseBdev4", 00:22:20.458 "uuid": "c2505fde-78b3-57c6-b967-b2bcb30644bf", 00:22:20.458 "is_configured": true, 00:22:20.458 "data_offset": 2048, 00:22:20.458 "data_size": 63488 00:22:20.458 } 00:22:20.458 ] 00:22:20.458 }' 00:22:20.458 17:01:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:20.716 17:01:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:20.716 17:01:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:20.716 17:01:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:20.716 17:01:09 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:20.974 [2024-11-05 17:01:09.637851] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:20.974 [2024-11-05 17:01:09.671731] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:20.974 [2024-11-05 17:01:09.671948] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:20.974 17:01:09 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:20.974 17:01:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:20.974 17:01:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:20.974 17:01:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:20.974 17:01:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:20.974 17:01:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:20.974 17:01:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:20.974 17:01:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:20.974 17:01:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:20.974 17:01:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:20.974 17:01:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.974 17:01:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.231 17:01:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:21.231 "name": "raid_bdev1", 00:22:21.231 "uuid": "81b098a4-ae88-44eb-8e07-30d9788f3539", 00:22:21.231 "strip_size_kb": 0, 00:22:21.231 "state": "online", 00:22:21.231 "raid_level": "raid1", 00:22:21.231 "superblock": true, 00:22:21.231 "num_base_bdevs": 4, 00:22:21.231 "num_base_bdevs_discovered": 3, 00:22:21.232 "num_base_bdevs_operational": 3, 00:22:21.232 "base_bdevs_list": [ 00:22:21.232 { 00:22:21.232 "name": null, 00:22:21.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.232 "is_configured": false, 00:22:21.232 "data_offset": 2048, 00:22:21.232 "data_size": 63488 00:22:21.232 }, 00:22:21.232 { 00:22:21.232 "name": "BaseBdev2", 00:22:21.232 "uuid": "df242031-99af-5e6a-8117-0ef2f4e9b1d8", 00:22:21.232 "is_configured": true, 00:22:21.232 "data_offset": 2048, 00:22:21.232 "data_size": 63488 00:22:21.232 }, 00:22:21.232 { 00:22:21.232 
"name": "BaseBdev3", 00:22:21.232 "uuid": "32939f8a-20d4-51e1-be75-fa9c3d2044f9", 00:22:21.232 "is_configured": true, 00:22:21.232 "data_offset": 2048, 00:22:21.232 "data_size": 63488 00:22:21.232 }, 00:22:21.232 { 00:22:21.232 "name": "BaseBdev4", 00:22:21.232 "uuid": "c2505fde-78b3-57c6-b967-b2bcb30644bf", 00:22:21.232 "is_configured": true, 00:22:21.232 "data_offset": 2048, 00:22:21.232 "data_size": 63488 00:22:21.232 } 00:22:21.232 ] 00:22:21.232 }' 00:22:21.232 17:01:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:21.232 17:01:09 -- common/autotest_common.sh@10 -- # set +x 00:22:21.797 17:01:10 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:21.797 17:01:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:21.797 17:01:10 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:21.797 17:01:10 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:21.797 17:01:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:21.797 17:01:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.797 17:01:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.055 17:01:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:22.055 "name": "raid_bdev1", 00:22:22.055 "uuid": "81b098a4-ae88-44eb-8e07-30d9788f3539", 00:22:22.055 "strip_size_kb": 0, 00:22:22.055 "state": "online", 00:22:22.055 "raid_level": "raid1", 00:22:22.055 "superblock": true, 00:22:22.055 "num_base_bdevs": 4, 00:22:22.055 "num_base_bdevs_discovered": 3, 00:22:22.055 "num_base_bdevs_operational": 3, 00:22:22.055 "base_bdevs_list": [ 00:22:22.055 { 00:22:22.055 "name": null, 00:22:22.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.055 "is_configured": false, 00:22:22.055 "data_offset": 2048, 00:22:22.055 "data_size": 63488 00:22:22.055 }, 00:22:22.055 { 00:22:22.055 "name": "BaseBdev2", 00:22:22.055 "uuid": "df242031-99af-5e6a-8117-0ef2f4e9b1d8", 00:22:22.055 "is_configured": true, 00:22:22.055 "data_offset": 2048, 00:22:22.055 "data_size": 63488 00:22:22.055 }, 00:22:22.055 { 00:22:22.055 "name": "BaseBdev3", 00:22:22.055 "uuid": "32939f8a-20d4-51e1-be75-fa9c3d2044f9", 00:22:22.055 "is_configured": true, 00:22:22.055 "data_offset": 2048, 00:22:22.055 "data_size": 63488 00:22:22.055 }, 00:22:22.055 { 00:22:22.055 "name": "BaseBdev4", 00:22:22.055 "uuid": "c2505fde-78b3-57c6-b967-b2bcb30644bf", 00:22:22.055 "is_configured": true, 00:22:22.055 "data_offset": 2048, 00:22:22.055 "data_size": 63488 00:22:22.055 } 00:22:22.055 ] 00:22:22.055 }' 00:22:22.055 17:01:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:22.055 17:01:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:22.055 17:01:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:22.055 17:01:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:22.055 17:01:10 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:22.313 [2024-11-05 17:01:11.147153] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:22.313 [2024-11-05 17:01:11.147377] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:22.313 [2024-11-05 17:01:11.157918] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:22:22.313 [2024-11-05 17:01:11.159937] 
bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:22.313 17:01:11 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:23.686 17:01:12 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:23.686 17:01:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:23.686 17:01:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:23.686 17:01:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:23.687 17:01:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:23.687 17:01:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.687 17:01:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.687 17:01:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:23.687 "name": "raid_bdev1", 00:22:23.687 "uuid": "81b098a4-ae88-44eb-8e07-30d9788f3539", 00:22:23.687 "strip_size_kb": 0, 00:22:23.687 "state": "online", 00:22:23.687 "raid_level": "raid1", 00:22:23.687 "superblock": true, 00:22:23.687 "num_base_bdevs": 4, 00:22:23.687 "num_base_bdevs_discovered": 4, 00:22:23.687 "num_base_bdevs_operational": 4, 00:22:23.687 "process": { 00:22:23.687 "type": "rebuild", 00:22:23.687 "target": "spare", 00:22:23.687 "progress": { 00:22:23.687 "blocks": 22528, 00:22:23.687 "percent": 35 00:22:23.687 } 00:22:23.687 }, 00:22:23.687 "base_bdevs_list": [ 00:22:23.687 { 00:22:23.687 "name": "spare", 00:22:23.687 "uuid": "3ebab75f-443b-5ae3-9ac9-26d7aa9ccd3a", 00:22:23.687 "is_configured": true, 00:22:23.687 "data_offset": 2048, 00:22:23.687 "data_size": 63488 00:22:23.687 }, 00:22:23.687 { 00:22:23.687 "name": "BaseBdev2", 00:22:23.687 "uuid": "df242031-99af-5e6a-8117-0ef2f4e9b1d8", 00:22:23.687 "is_configured": true, 00:22:23.687 "data_offset": 2048, 00:22:23.687 "data_size": 63488 00:22:23.687 }, 00:22:23.687 { 00:22:23.687 "name": "BaseBdev3", 00:22:23.687 "uuid": "32939f8a-20d4-51e1-be75-fa9c3d2044f9", 00:22:23.687 "is_configured": true, 00:22:23.687 "data_offset": 2048, 00:22:23.687 "data_size": 63488 00:22:23.687 }, 00:22:23.687 { 00:22:23.687 "name": "BaseBdev4", 00:22:23.687 "uuid": "c2505fde-78b3-57c6-b967-b2bcb30644bf", 00:22:23.687 "is_configured": true, 00:22:23.687 "data_offset": 2048, 00:22:23.687 "data_size": 63488 00:22:23.687 } 00:22:23.687 ] 00:22:23.687 }' 00:22:23.687 17:01:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:23.687 17:01:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:23.687 17:01:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:23.687 17:01:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:23.687 17:01:12 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:23.687 17:01:12 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:23.687 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:23.687 17:01:12 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:23.687 17:01:12 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:23.687 17:01:12 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:23.687 17:01:12 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:23.944 [2024-11-05 17:01:12.658210] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:23.944 [2024-11-05 17:01:12.668638] 
bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3360 00:22:23.944 17:01:12 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:23.944 17:01:12 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:23.944 17:01:12 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:23.944 17:01:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:23.944 17:01:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:23.944 17:01:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:23.944 17:01:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:23.944 17:01:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.944 17:01:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.202 17:01:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:24.202 "name": "raid_bdev1", 00:22:24.202 "uuid": "81b098a4-ae88-44eb-8e07-30d9788f3539", 00:22:24.202 "strip_size_kb": 0, 00:22:24.202 "state": "online", 00:22:24.202 "raid_level": "raid1", 00:22:24.202 "superblock": true, 00:22:24.202 "num_base_bdevs": 4, 00:22:24.203 "num_base_bdevs_discovered": 3, 00:22:24.203 "num_base_bdevs_operational": 3, 00:22:24.203 "process": { 00:22:24.203 "type": "rebuild", 00:22:24.203 "target": "spare", 00:22:24.203 "progress": { 00:22:24.203 "blocks": 36864, 00:22:24.203 "percent": 58 00:22:24.203 } 00:22:24.203 }, 00:22:24.203 "base_bdevs_list": [ 00:22:24.203 { 00:22:24.203 "name": "spare", 00:22:24.203 "uuid": "3ebab75f-443b-5ae3-9ac9-26d7aa9ccd3a", 00:22:24.203 "is_configured": true, 00:22:24.203 "data_offset": 2048, 00:22:24.203 "data_size": 63488 00:22:24.203 }, 00:22:24.203 { 00:22:24.203 "name": null, 00:22:24.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.203 "is_configured": false, 00:22:24.203 "data_offset": 2048, 00:22:24.203 "data_size": 63488 00:22:24.203 }, 00:22:24.203 { 00:22:24.203 "name": "BaseBdev3", 00:22:24.203 "uuid": "32939f8a-20d4-51e1-be75-fa9c3d2044f9", 00:22:24.203 "is_configured": true, 00:22:24.203 "data_offset": 2048, 00:22:24.203 "data_size": 63488 00:22:24.203 }, 00:22:24.203 { 00:22:24.203 "name": "BaseBdev4", 00:22:24.203 "uuid": "c2505fde-78b3-57c6-b967-b2bcb30644bf", 00:22:24.203 "is_configured": true, 00:22:24.203 "data_offset": 2048, 00:22:24.203 "data_size": 63488 00:22:24.203 } 00:22:24.203 ] 00:22:24.203 }' 00:22:24.203 17:01:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:24.203 17:01:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:24.203 17:01:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:24.203 17:01:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:24.203 17:01:13 -- bdev/bdev_raid.sh@657 -- # local timeout=513 00:22:24.203 17:01:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:24.203 17:01:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:24.203 17:01:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:24.203 17:01:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:24.203 17:01:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:24.203 17:01:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:24.203 17:01:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.203 17:01:13 -- 
bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.461 17:01:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:24.461 "name": "raid_bdev1", 00:22:24.461 "uuid": "81b098a4-ae88-44eb-8e07-30d9788f3539", 00:22:24.461 "strip_size_kb": 0, 00:22:24.461 "state": "online", 00:22:24.461 "raid_level": "raid1", 00:22:24.461 "superblock": true, 00:22:24.461 "num_base_bdevs": 4, 00:22:24.461 "num_base_bdevs_discovered": 3, 00:22:24.461 "num_base_bdevs_operational": 3, 00:22:24.461 "process": { 00:22:24.461 "type": "rebuild", 00:22:24.461 "target": "spare", 00:22:24.461 "progress": { 00:22:24.461 "blocks": 40960, 00:22:24.461 "percent": 64 00:22:24.461 } 00:22:24.461 }, 00:22:24.461 "base_bdevs_list": [ 00:22:24.461 { 00:22:24.461 "name": "spare", 00:22:24.461 "uuid": "3ebab75f-443b-5ae3-9ac9-26d7aa9ccd3a", 00:22:24.461 "is_configured": true, 00:22:24.461 "data_offset": 2048, 00:22:24.461 "data_size": 63488 00:22:24.461 }, 00:22:24.461 { 00:22:24.461 "name": null, 00:22:24.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.461 "is_configured": false, 00:22:24.461 "data_offset": 2048, 00:22:24.461 "data_size": 63488 00:22:24.461 }, 00:22:24.461 { 00:22:24.461 "name": "BaseBdev3", 00:22:24.461 "uuid": "32939f8a-20d4-51e1-be75-fa9c3d2044f9", 00:22:24.461 "is_configured": true, 00:22:24.461 "data_offset": 2048, 00:22:24.461 "data_size": 63488 00:22:24.461 }, 00:22:24.461 { 00:22:24.461 "name": "BaseBdev4", 00:22:24.461 "uuid": "c2505fde-78b3-57c6-b967-b2bcb30644bf", 00:22:24.461 "is_configured": true, 00:22:24.461 "data_offset": 2048, 00:22:24.461 "data_size": 63488 00:22:24.461 } 00:22:24.461 ] 00:22:24.461 }' 00:22:24.461 17:01:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:24.461 17:01:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:24.461 17:01:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:24.719 17:01:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:24.719 17:01:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:25.653 [2024-11-05 17:01:14.278861] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:25.653 [2024-11-05 17:01:14.279085] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:25.653 [2024-11-05 17:01:14.279364] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.653 17:01:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:25.653 17:01:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:25.653 17:01:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:25.653 17:01:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:25.653 17:01:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:25.653 17:01:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:25.653 17:01:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.653 17:01:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.911 17:01:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:25.911 "name": "raid_bdev1", 00:22:25.911 "uuid": "81b098a4-ae88-44eb-8e07-30d9788f3539", 00:22:25.911 "strip_size_kb": 0, 00:22:25.911 "state": "online", 00:22:25.911 "raid_level": "raid1", 00:22:25.911 "superblock": true, 00:22:25.911 "num_base_bdevs": 4, 00:22:25.911 "num_base_bdevs_discovered": 3, 
00:22:25.911 "num_base_bdevs_operational": 3, 00:22:25.911 "base_bdevs_list": [ 00:22:25.911 { 00:22:25.911 "name": "spare", 00:22:25.911 "uuid": "3ebab75f-443b-5ae3-9ac9-26d7aa9ccd3a", 00:22:25.912 "is_configured": true, 00:22:25.912 "data_offset": 2048, 00:22:25.912 "data_size": 63488 00:22:25.912 }, 00:22:25.912 { 00:22:25.912 "name": null, 00:22:25.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.912 "is_configured": false, 00:22:25.912 "data_offset": 2048, 00:22:25.912 "data_size": 63488 00:22:25.912 }, 00:22:25.912 { 00:22:25.912 "name": "BaseBdev3", 00:22:25.912 "uuid": "32939f8a-20d4-51e1-be75-fa9c3d2044f9", 00:22:25.912 "is_configured": true, 00:22:25.912 "data_offset": 2048, 00:22:25.912 "data_size": 63488 00:22:25.912 }, 00:22:25.912 { 00:22:25.912 "name": "BaseBdev4", 00:22:25.912 "uuid": "c2505fde-78b3-57c6-b967-b2bcb30644bf", 00:22:25.912 "is_configured": true, 00:22:25.912 "data_offset": 2048, 00:22:25.912 "data_size": 63488 00:22:25.912 } 00:22:25.912 ] 00:22:25.912 }' 00:22:25.912 17:01:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:25.912 17:01:14 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:25.912 17:01:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:25.912 17:01:14 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:25.912 17:01:14 -- bdev/bdev_raid.sh@660 -- # break 00:22:25.912 17:01:14 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:25.912 17:01:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:25.912 17:01:14 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:25.912 17:01:14 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:25.912 17:01:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:25.912 17:01:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.912 17:01:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.170 17:01:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:26.170 "name": "raid_bdev1", 00:22:26.170 "uuid": "81b098a4-ae88-44eb-8e07-30d9788f3539", 00:22:26.170 "strip_size_kb": 0, 00:22:26.170 "state": "online", 00:22:26.170 "raid_level": "raid1", 00:22:26.170 "superblock": true, 00:22:26.170 "num_base_bdevs": 4, 00:22:26.170 "num_base_bdevs_discovered": 3, 00:22:26.170 "num_base_bdevs_operational": 3, 00:22:26.170 "base_bdevs_list": [ 00:22:26.170 { 00:22:26.170 "name": "spare", 00:22:26.170 "uuid": "3ebab75f-443b-5ae3-9ac9-26d7aa9ccd3a", 00:22:26.170 "is_configured": true, 00:22:26.170 "data_offset": 2048, 00:22:26.170 "data_size": 63488 00:22:26.170 }, 00:22:26.170 { 00:22:26.170 "name": null, 00:22:26.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.170 "is_configured": false, 00:22:26.170 "data_offset": 2048, 00:22:26.170 "data_size": 63488 00:22:26.170 }, 00:22:26.170 { 00:22:26.170 "name": "BaseBdev3", 00:22:26.170 "uuid": "32939f8a-20d4-51e1-be75-fa9c3d2044f9", 00:22:26.170 "is_configured": true, 00:22:26.170 "data_offset": 2048, 00:22:26.170 "data_size": 63488 00:22:26.170 }, 00:22:26.170 { 00:22:26.170 "name": "BaseBdev4", 00:22:26.170 "uuid": "c2505fde-78b3-57c6-b967-b2bcb30644bf", 00:22:26.170 "is_configured": true, 00:22:26.170 "data_offset": 2048, 00:22:26.170 "data_size": 63488 00:22:26.170 } 00:22:26.170 ] 00:22:26.170 }' 00:22:26.170 17:01:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:26.170 17:01:15 -- 
bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:26.170 17:01:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:26.170 17:01:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:26.170 17:01:15 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:26.170 17:01:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:26.170 17:01:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:26.170 17:01:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:26.428 17:01:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:26.428 17:01:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:26.428 17:01:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:26.428 17:01:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:26.428 17:01:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:26.428 17:01:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:26.428 17:01:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.428 17:01:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.686 17:01:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:26.686 "name": "raid_bdev1", 00:22:26.686 "uuid": "81b098a4-ae88-44eb-8e07-30d9788f3539", 00:22:26.686 "strip_size_kb": 0, 00:22:26.686 "state": "online", 00:22:26.686 "raid_level": "raid1", 00:22:26.686 "superblock": true, 00:22:26.686 "num_base_bdevs": 4, 00:22:26.686 "num_base_bdevs_discovered": 3, 00:22:26.686 "num_base_bdevs_operational": 3, 00:22:26.686 "base_bdevs_list": [ 00:22:26.686 { 00:22:26.686 "name": "spare", 00:22:26.686 "uuid": "3ebab75f-443b-5ae3-9ac9-26d7aa9ccd3a", 00:22:26.686 "is_configured": true, 00:22:26.686 "data_offset": 2048, 00:22:26.686 "data_size": 63488 00:22:26.686 }, 00:22:26.686 { 00:22:26.686 "name": null, 00:22:26.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.686 "is_configured": false, 00:22:26.686 "data_offset": 2048, 00:22:26.686 "data_size": 63488 00:22:26.686 }, 00:22:26.686 { 00:22:26.686 "name": "BaseBdev3", 00:22:26.686 "uuid": "32939f8a-20d4-51e1-be75-fa9c3d2044f9", 00:22:26.686 "is_configured": true, 00:22:26.686 "data_offset": 2048, 00:22:26.686 "data_size": 63488 00:22:26.686 }, 00:22:26.686 { 00:22:26.686 "name": "BaseBdev4", 00:22:26.686 "uuid": "c2505fde-78b3-57c6-b967-b2bcb30644bf", 00:22:26.686 "is_configured": true, 00:22:26.686 "data_offset": 2048, 00:22:26.686 "data_size": 63488 00:22:26.686 } 00:22:26.686 ] 00:22:26.686 }' 00:22:26.686 17:01:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:26.686 17:01:15 -- common/autotest_common.sh@10 -- # set +x 00:22:27.251 17:01:15 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:27.251 [2024-11-05 17:01:16.092076] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:27.251 [2024-11-05 17:01:16.092244] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:27.251 [2024-11-05 17:01:16.092462] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:27.251 [2024-11-05 17:01:16.092702] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:27.251 [2024-11-05 17:01:16.092837] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x61600000a580 name raid_bdev1, state offline 00:22:27.251 17:01:16 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:27.251 17:01:16 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.509 17:01:16 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:27.509 17:01:16 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:27.509 17:01:16 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:27.509 17:01:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:27.509 17:01:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:27.509 17:01:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:27.509 17:01:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:27.509 17:01:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:27.509 17:01:16 -- bdev/nbd_common.sh@12 -- # local i 00:22:27.509 17:01:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:27.509 17:01:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:27.509 17:01:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:27.767 /dev/nbd0 00:22:27.767 17:01:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:27.767 17:01:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:27.767 17:01:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:27.767 17:01:16 -- common/autotest_common.sh@867 -- # local i 00:22:27.767 17:01:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:27.767 17:01:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:27.767 17:01:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:27.767 17:01:16 -- common/autotest_common.sh@871 -- # break 00:22:27.767 17:01:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:27.767 17:01:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:27.767 17:01:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:27.767 1+0 records in 00:22:27.767 1+0 records out 00:22:27.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490417 s, 8.4 MB/s 00:22:27.767 17:01:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:27.767 17:01:16 -- common/autotest_common.sh@884 -- # size=4096 00:22:27.767 17:01:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:27.767 17:01:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:27.767 17:01:16 -- common/autotest_common.sh@887 -- # return 0 00:22:27.767 17:01:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:27.767 17:01:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:27.767 17:01:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:28.024 /dev/nbd1 00:22:28.024 17:01:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:28.024 17:01:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:28.024 17:01:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:28.024 17:01:16 -- common/autotest_common.sh@867 -- # local i 00:22:28.024 17:01:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:28.024 17:01:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:28.024 17:01:16 -- 
common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:28.024 17:01:16 -- common/autotest_common.sh@871 -- # break 00:22:28.024 17:01:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:28.024 17:01:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:28.024 17:01:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:28.024 1+0 records in 00:22:28.024 1+0 records out 00:22:28.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607421 s, 6.7 MB/s 00:22:28.024 17:01:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:28.024 17:01:16 -- common/autotest_common.sh@884 -- # size=4096 00:22:28.024 17:01:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:28.024 17:01:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:28.024 17:01:16 -- common/autotest_common.sh@887 -- # return 0 00:22:28.024 17:01:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:28.024 17:01:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:28.025 17:01:16 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:28.282 17:01:17 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:28.282 17:01:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:28.282 17:01:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:28.282 17:01:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:28.282 17:01:17 -- bdev/nbd_common.sh@51 -- # local i 00:22:28.282 17:01:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:28.282 17:01:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:28.540 17:01:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:28.540 17:01:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:28.540 17:01:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:28.540 17:01:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:28.540 17:01:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:28.540 17:01:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:28.540 17:01:17 -- bdev/nbd_common.sh@41 -- # break 00:22:28.540 17:01:17 -- bdev/nbd_common.sh@45 -- # return 0 00:22:28.540 17:01:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:28.540 17:01:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:28.798 17:01:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:28.798 17:01:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:28.798 17:01:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:28.798 17:01:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:28.798 17:01:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:28.798 17:01:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:28.798 17:01:17 -- bdev/nbd_common.sh@41 -- # break 00:22:28.798 17:01:17 -- bdev/nbd_common.sh@45 -- # return 0 00:22:28.798 17:01:17 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:28.798 17:01:17 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:28.798 17:01:17 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:28.798 17:01:17 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete BaseBdev1 00:22:29.056 17:01:17 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:29.314 [2024-11-05 17:01:18.003991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:29.314 [2024-11-05 17:01:18.004211] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.314 [2024-11-05 17:01:18.004299] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:29.314 [2024-11-05 17:01:18.004557] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.314 [2024-11-05 17:01:18.006979] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.314 [2024-11-05 17:01:18.007197] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:29.314 [2024-11-05 17:01:18.007438] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:29.314 [2024-11-05 17:01:18.007600] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:29.314 BaseBdev1 00:22:29.314 17:01:18 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:29.314 17:01:18 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:22:29.314 17:01:18 -- bdev/bdev_raid.sh@696 -- # continue 00:22:29.314 17:01:18 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:29.314 17:01:18 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:29.314 17:01:18 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:29.314 17:01:18 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:29.574 [2024-11-05 17:01:18.440088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:29.574 [2024-11-05 17:01:18.440339] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.574 [2024-11-05 17:01:18.440421] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:22:29.574 [2024-11-05 17:01:18.440663] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.574 [2024-11-05 17:01:18.441198] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.574 [2024-11-05 17:01:18.441420] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:29.574 [2024-11-05 17:01:18.441652] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:29.574 [2024-11-05 17:01:18.441774] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:22:29.574 [2024-11-05 17:01:18.441887] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:29.574 [2024-11-05 17:01:18.441952] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:22:29.574 [2024-11-05 17:01:18.442230] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:29.574 BaseBdev3 00:22:29.574 17:01:18 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:29.574 17:01:18 -- bdev/bdev_raid.sh@695 -- # '[' -z 
BaseBdev4 ']' 00:22:29.574 17:01:18 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:22:29.849 17:01:18 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:30.118 [2024-11-05 17:01:18.824174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:30.118 [2024-11-05 17:01:18.824457] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.118 [2024-11-05 17:01:18.824669] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:22:30.118 [2024-11-05 17:01:18.824847] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.118 [2024-11-05 17:01:18.825418] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.118 [2024-11-05 17:01:18.825616] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:30.118 [2024-11-05 17:01:18.825832] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:22:30.118 [2024-11-05 17:01:18.825949] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:30.118 BaseBdev4 00:22:30.118 17:01:18 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:30.376 17:01:19 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:30.376 [2024-11-05 17:01:19.184193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:30.376 [2024-11-05 17:01:19.184396] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.376 [2024-11-05 17:01:19.184468] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:22:30.376 [2024-11-05 17:01:19.184717] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.376 [2024-11-05 17:01:19.185245] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.376 [2024-11-05 17:01:19.185453] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:30.376 [2024-11-05 17:01:19.185651] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:30.376 [2024-11-05 17:01:19.185791] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:30.376 spare 00:22:30.376 17:01:19 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:30.376 17:01:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:30.376 17:01:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:30.376 17:01:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:30.376 17:01:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:30.376 17:01:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:30.376 17:01:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:30.376 17:01:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:30.376 17:01:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:30.376 17:01:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:30.376 17:01:19 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.376 17:01:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.635 [2024-11-05 17:01:19.286026] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:22:30.635 [2024-11-05 17:01:19.286183] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:30.635 [2024-11-05 17:01:19.286330] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0 00:22:30.635 [2024-11-05 17:01:19.286910] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:22:30.635 [2024-11-05 17:01:19.287046] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:22:30.635 [2024-11-05 17:01:19.287339] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:30.635 17:01:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:30.635 "name": "raid_bdev1", 00:22:30.635 "uuid": "81b098a4-ae88-44eb-8e07-30d9788f3539", 00:22:30.635 "strip_size_kb": 0, 00:22:30.635 "state": "online", 00:22:30.635 "raid_level": "raid1", 00:22:30.635 "superblock": true, 00:22:30.635 "num_base_bdevs": 4, 00:22:30.635 "num_base_bdevs_discovered": 3, 00:22:30.635 "num_base_bdevs_operational": 3, 00:22:30.635 "base_bdevs_list": [ 00:22:30.635 { 00:22:30.635 "name": "spare", 00:22:30.635 "uuid": "3ebab75f-443b-5ae3-9ac9-26d7aa9ccd3a", 00:22:30.635 "is_configured": true, 00:22:30.635 "data_offset": 2048, 00:22:30.635 "data_size": 63488 00:22:30.635 }, 00:22:30.635 { 00:22:30.635 "name": null, 00:22:30.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.635 "is_configured": false, 00:22:30.635 "data_offset": 2048, 00:22:30.635 "data_size": 63488 00:22:30.635 }, 00:22:30.635 { 00:22:30.635 "name": "BaseBdev3", 00:22:30.635 "uuid": "32939f8a-20d4-51e1-be75-fa9c3d2044f9", 00:22:30.635 "is_configured": true, 00:22:30.635 "data_offset": 2048, 00:22:30.635 "data_size": 63488 00:22:30.635 }, 00:22:30.635 { 00:22:30.635 "name": "BaseBdev4", 00:22:30.635 "uuid": "c2505fde-78b3-57c6-b967-b2bcb30644bf", 00:22:30.635 "is_configured": true, 00:22:30.635 "data_offset": 2048, 00:22:30.635 "data_size": 63488 00:22:30.635 } 00:22:30.635 ] 00:22:30.635 }' 00:22:30.635 17:01:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:30.635 17:01:19 -- common/autotest_common.sh@10 -- # set +x 00:22:31.200 17:01:20 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:31.200 17:01:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:31.200 17:01:20 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:31.200 17:01:20 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:31.200 17:01:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:31.200 17:01:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.200 17:01:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.458 17:01:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:31.458 "name": "raid_bdev1", 00:22:31.458 "uuid": "81b098a4-ae88-44eb-8e07-30d9788f3539", 00:22:31.458 "strip_size_kb": 0, 00:22:31.458 "state": "online", 00:22:31.458 "raid_level": "raid1", 00:22:31.458 "superblock": true, 00:22:31.458 "num_base_bdevs": 4, 00:22:31.458 "num_base_bdevs_discovered": 3, 00:22:31.458 
"num_base_bdevs_operational": 3, 00:22:31.458 "base_bdevs_list": [ 00:22:31.458 { 00:22:31.458 "name": "spare", 00:22:31.458 "uuid": "3ebab75f-443b-5ae3-9ac9-26d7aa9ccd3a", 00:22:31.458 "is_configured": true, 00:22:31.458 "data_offset": 2048, 00:22:31.458 "data_size": 63488 00:22:31.458 }, 00:22:31.458 { 00:22:31.458 "name": null, 00:22:31.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.458 "is_configured": false, 00:22:31.458 "data_offset": 2048, 00:22:31.458 "data_size": 63488 00:22:31.458 }, 00:22:31.458 { 00:22:31.458 "name": "BaseBdev3", 00:22:31.458 "uuid": "32939f8a-20d4-51e1-be75-fa9c3d2044f9", 00:22:31.458 "is_configured": true, 00:22:31.458 "data_offset": 2048, 00:22:31.458 "data_size": 63488 00:22:31.458 }, 00:22:31.458 { 00:22:31.458 "name": "BaseBdev4", 00:22:31.458 "uuid": "c2505fde-78b3-57c6-b967-b2bcb30644bf", 00:22:31.458 "is_configured": true, 00:22:31.458 "data_offset": 2048, 00:22:31.458 "data_size": 63488 00:22:31.458 } 00:22:31.458 ] 00:22:31.458 }' 00:22:31.458 17:01:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:31.458 17:01:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:31.458 17:01:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:31.716 17:01:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:31.716 17:01:20 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.716 17:01:20 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:31.974 17:01:20 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:31.974 17:01:20 -- bdev/bdev_raid.sh@709 -- # killprocess 125209 00:22:31.974 17:01:20 -- common/autotest_common.sh@936 -- # '[' -z 125209 ']' 00:22:31.974 17:01:20 -- common/autotest_common.sh@940 -- # kill -0 125209 00:22:31.974 17:01:20 -- common/autotest_common.sh@941 -- # uname 00:22:31.974 17:01:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:31.974 17:01:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125209 00:22:31.974 killing process with pid 125209 00:22:31.974 Received shutdown signal, test time was about 60.000000 seconds 00:22:31.974 00:22:31.974 Latency(us) 00:22:31.974 [2024-11-05T17:01:20.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.974 [2024-11-05T17:01:20.852Z] =================================================================================================================== 00:22:31.975 [2024-11-05T17:01:20.852Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:31.975 17:01:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:31.975 17:01:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:31.975 17:01:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 125209' 00:22:31.975 17:01:20 -- common/autotest_common.sh@955 -- # kill 125209 00:22:31.975 17:01:20 -- common/autotest_common.sh@960 -- # wait 125209 00:22:31.975 [2024-11-05 17:01:20.646515] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:31.975 [2024-11-05 17:01:20.646630] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:31.975 [2024-11-05 17:01:20.646731] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:31.975 [2024-11-05 17:01:20.646787] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, 
state offline 00:22:32.232 [2024-11-05 17:01:20.973397] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:33.167 ************************************ 00:22:33.167 END TEST raid_rebuild_test_sb 00:22:33.167 ************************************ 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:33.167 00:22:33.167 real 0m26.383s 00:22:33.167 user 0m38.168s 00:22:33.167 sys 0m3.886s 00:22:33.167 17:01:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:33.167 17:01:21 -- common/autotest_common.sh@10 -- # set +x 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:22:33.167 17:01:21 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:22:33.167 17:01:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:33.167 17:01:21 -- common/autotest_common.sh@10 -- # set +x 00:22:33.167 ************************************ 00:22:33.167 START TEST raid_rebuild_test_io 00:22:33.167 ************************************ 00:22:33.167 17:01:21 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 false true 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@544 -- # raid_pid=125859 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@545 -- # waitforlisten 125859 /var/tmp/spdk-raid.sock 00:22:33.167 17:01:21 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:33.167 17:01:21 
-- common/autotest_common.sh@829 -- # '[' -z 125859 ']' 00:22:33.167 17:01:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:33.167 17:01:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.167 17:01:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:33.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:33.167 17:01:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.167 17:01:21 -- common/autotest_common.sh@10 -- # set +x 00:22:33.167 [2024-11-05 17:01:22.018831] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:33.167 [2024-11-05 17:01:22.019258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125859 ] 00:22:33.167 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:33.167 Zero copy mechanism will not be used. 00:22:33.425 [2024-11-05 17:01:22.183138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.684 [2024-11-05 17:01:22.347165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.684 [2024-11-05 17:01:22.513191] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:34.250 17:01:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.250 17:01:22 -- common/autotest_common.sh@862 -- # return 0 00:22:34.250 17:01:22 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:34.250 17:01:22 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:34.250 17:01:22 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:34.250 BaseBdev1 00:22:34.250 17:01:23 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:34.250 17:01:23 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:34.250 17:01:23 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:34.508 BaseBdev2 00:22:34.508 17:01:23 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:34.508 17:01:23 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:34.508 17:01:23 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:34.767 BaseBdev3 00:22:35.025 17:01:23 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:35.025 17:01:23 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:35.025 17:01:23 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:35.284 BaseBdev4 00:22:35.284 17:01:23 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:35.284 spare_malloc 00:22:35.284 17:01:24 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:35.543 spare_delay 00:22:35.543 17:01:24 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b spare_delay -p spare 00:22:35.801 [2024-11-05 17:01:24.589688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:35.801 [2024-11-05 17:01:24.589923] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.801 [2024-11-05 17:01:24.590066] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:22:35.801 [2024-11-05 17:01:24.590220] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.801 [2024-11-05 17:01:24.592371] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.801 [2024-11-05 17:01:24.592577] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:35.801 spare 00:22:35.801 17:01:24 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:36.059 [2024-11-05 17:01:24.789742] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:36.059 [2024-11-05 17:01:24.791656] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:36.059 [2024-11-05 17:01:24.791834] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:36.059 [2024-11-05 17:01:24.791913] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:36.059 [2024-11-05 17:01:24.792125] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:22:36.059 [2024-11-05 17:01:24.792258] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:36.059 [2024-11-05 17:01:24.792407] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:36.059 [2024-11-05 17:01:24.792868] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:22:36.059 [2024-11-05 17:01:24.792998] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:22:36.059 [2024-11-05 17:01:24.793234] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:36.059 17:01:24 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:36.059 17:01:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:36.059 17:01:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:36.059 17:01:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:36.059 17:01:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:36.059 17:01:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:36.059 17:01:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:36.059 17:01:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:36.059 17:01:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:36.059 17:01:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:36.059 17:01:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.059 17:01:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.317 17:01:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:36.317 "name": "raid_bdev1", 00:22:36.317 "uuid": "459ac072-ddca-4230-8011-de5e0c876b3a", 00:22:36.317 "strip_size_kb": 0, 00:22:36.317 "state": "online", 
00:22:36.317 "raid_level": "raid1", 00:22:36.317 "superblock": false, 00:22:36.317 "num_base_bdevs": 4, 00:22:36.317 "num_base_bdevs_discovered": 4, 00:22:36.317 "num_base_bdevs_operational": 4, 00:22:36.317 "base_bdevs_list": [ 00:22:36.317 { 00:22:36.317 "name": "BaseBdev1", 00:22:36.317 "uuid": "556ccf0a-ed89-4054-9c5a-78695691ee64", 00:22:36.317 "is_configured": true, 00:22:36.317 "data_offset": 0, 00:22:36.317 "data_size": 65536 00:22:36.317 }, 00:22:36.317 { 00:22:36.317 "name": "BaseBdev2", 00:22:36.317 "uuid": "189a9329-b307-4b90-a582-e79642dbfc35", 00:22:36.317 "is_configured": true, 00:22:36.317 "data_offset": 0, 00:22:36.317 "data_size": 65536 00:22:36.317 }, 00:22:36.317 { 00:22:36.317 "name": "BaseBdev3", 00:22:36.317 "uuid": "d3287565-60a5-4db3-a140-c0d390ebe85c", 00:22:36.317 "is_configured": true, 00:22:36.317 "data_offset": 0, 00:22:36.317 "data_size": 65536 00:22:36.317 }, 00:22:36.317 { 00:22:36.317 "name": "BaseBdev4", 00:22:36.317 "uuid": "50973b12-715c-4f35-b53e-9ba2ce7c3b93", 00:22:36.317 "is_configured": true, 00:22:36.317 "data_offset": 0, 00:22:36.317 "data_size": 65536 00:22:36.317 } 00:22:36.317 ] 00:22:36.317 }' 00:22:36.317 17:01:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:36.317 17:01:24 -- common/autotest_common.sh@10 -- # set +x 00:22:36.883 17:01:25 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:36.883 17:01:25 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:37.141 [2024-11-05 17:01:25.874133] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:37.141 17:01:25 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:22:37.141 17:01:25 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:37.141 17:01:25 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.399 17:01:26 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:37.399 17:01:26 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:37.399 17:01:26 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:37.399 17:01:26 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:37.399 [2024-11-05 17:01:26.236413] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:22:37.399 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:37.399 Zero copy mechanism will not be used. 00:22:37.399 Running I/O for 60 seconds... 
00:22:37.399 [2024-11-05 17:01:26.294709] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:37.399 [2024-11-05 17:01:26.295206] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:22:37.657 17:01:26 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:37.657 17:01:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:37.657 17:01:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:37.657 17:01:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:37.657 17:01:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:37.657 17:01:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:37.657 17:01:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:37.657 17:01:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:37.657 17:01:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:37.657 17:01:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:37.657 17:01:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.657 17:01:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.657 17:01:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:37.657 "name": "raid_bdev1", 00:22:37.657 "uuid": "459ac072-ddca-4230-8011-de5e0c876b3a", 00:22:37.657 "strip_size_kb": 0, 00:22:37.657 "state": "online", 00:22:37.657 "raid_level": "raid1", 00:22:37.657 "superblock": false, 00:22:37.657 "num_base_bdevs": 4, 00:22:37.657 "num_base_bdevs_discovered": 3, 00:22:37.657 "num_base_bdevs_operational": 3, 00:22:37.657 "base_bdevs_list": [ 00:22:37.657 { 00:22:37.657 "name": null, 00:22:37.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.657 "is_configured": false, 00:22:37.657 "data_offset": 0, 00:22:37.657 "data_size": 65536 00:22:37.657 }, 00:22:37.657 { 00:22:37.657 "name": "BaseBdev2", 00:22:37.657 "uuid": "189a9329-b307-4b90-a582-e79642dbfc35", 00:22:37.657 "is_configured": true, 00:22:37.657 "data_offset": 0, 00:22:37.658 "data_size": 65536 00:22:37.658 }, 00:22:37.658 { 00:22:37.658 "name": "BaseBdev3", 00:22:37.658 "uuid": "d3287565-60a5-4db3-a140-c0d390ebe85c", 00:22:37.658 "is_configured": true, 00:22:37.658 "data_offset": 0, 00:22:37.658 "data_size": 65536 00:22:37.658 }, 00:22:37.658 { 00:22:37.658 "name": "BaseBdev4", 00:22:37.658 "uuid": "50973b12-715c-4f35-b53e-9ba2ce7c3b93", 00:22:37.658 "is_configured": true, 00:22:37.658 "data_offset": 0, 00:22:37.658 "data_size": 65536 00:22:37.658 } 00:22:37.658 ] 00:22:37.658 }' 00:22:37.658 17:01:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:37.658 17:01:26 -- common/autotest_common.sh@10 -- # set +x 00:22:38.593 17:01:27 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:38.593 [2024-11-05 17:01:27.294451] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:38.593 [2024-11-05 17:01:27.294808] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:38.593 17:01:27 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:38.593 [2024-11-05 17:01:27.343760] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:22:38.593 [2024-11-05 17:01:27.345909] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:38.593 [2024-11-05 
17:01:27.454930] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:38.593 [2024-11-05 17:01:27.455775] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:38.851 [2024-11-05 17:01:27.558503] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:38.851 [2024-11-05 17:01:27.558776] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:39.108 [2024-11-05 17:01:27.827134] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:39.108 [2024-11-05 17:01:27.827858] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:39.366 [2024-11-05 17:01:28.044544] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:39.366 [2024-11-05 17:01:28.045184] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:39.624 17:01:28 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:39.624 17:01:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:39.624 17:01:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:39.624 17:01:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:39.624 17:01:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:39.624 17:01:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.624 17:01:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.624 [2024-11-05 17:01:28.390190] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:39.624 [2024-11-05 17:01:28.390837] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:39.882 17:01:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:39.882 "name": "raid_bdev1", 00:22:39.882 "uuid": "459ac072-ddca-4230-8011-de5e0c876b3a", 00:22:39.882 "strip_size_kb": 0, 00:22:39.882 "state": "online", 00:22:39.882 "raid_level": "raid1", 00:22:39.882 "superblock": false, 00:22:39.882 "num_base_bdevs": 4, 00:22:39.882 "num_base_bdevs_discovered": 4, 00:22:39.882 "num_base_bdevs_operational": 4, 00:22:39.882 "process": { 00:22:39.882 "type": "rebuild", 00:22:39.882 "target": "spare", 00:22:39.882 "progress": { 00:22:39.882 "blocks": 14336, 00:22:39.882 "percent": 21 00:22:39.882 } 00:22:39.882 }, 00:22:39.882 "base_bdevs_list": [ 00:22:39.882 { 00:22:39.882 "name": "spare", 00:22:39.882 "uuid": "080d6d6d-c468-5689-a6f6-5ebcc4e6fb90", 00:22:39.882 "is_configured": true, 00:22:39.882 "data_offset": 0, 00:22:39.882 "data_size": 65536 00:22:39.882 }, 00:22:39.882 { 00:22:39.882 "name": "BaseBdev2", 00:22:39.882 "uuid": "189a9329-b307-4b90-a582-e79642dbfc35", 00:22:39.882 "is_configured": true, 00:22:39.882 "data_offset": 0, 00:22:39.882 "data_size": 65536 00:22:39.882 }, 00:22:39.882 { 00:22:39.882 "name": "BaseBdev3", 00:22:39.882 "uuid": "d3287565-60a5-4db3-a140-c0d390ebe85c", 00:22:39.882 "is_configured": true, 00:22:39.882 "data_offset": 0, 00:22:39.882 "data_size": 
65536 00:22:39.882 }, 00:22:39.882 { 00:22:39.882 "name": "BaseBdev4", 00:22:39.882 "uuid": "50973b12-715c-4f35-b53e-9ba2ce7c3b93", 00:22:39.882 "is_configured": true, 00:22:39.882 "data_offset": 0, 00:22:39.882 "data_size": 65536 00:22:39.882 } 00:22:39.882 ] 00:22:39.882 }' 00:22:39.882 17:01:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:39.882 17:01:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:39.882 17:01:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:39.882 17:01:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:39.882 17:01:28 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:39.882 [2024-11-05 17:01:28.756245] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:40.141 [2024-11-05 17:01:28.829402] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:40.141 [2024-11-05 17:01:28.879644] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:40.141 [2024-11-05 17:01:28.985431] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:40.141 [2024-11-05 17:01:28.988621] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:40.141 [2024-11-05 17:01:29.019947] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:22:40.399 17:01:29 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:40.399 17:01:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:40.399 17:01:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:40.399 17:01:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:40.399 17:01:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:40.399 17:01:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:40.399 17:01:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:40.399 17:01:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:40.399 17:01:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:40.399 17:01:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:40.399 17:01:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.399 17:01:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.657 17:01:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:40.657 "name": "raid_bdev1", 00:22:40.657 "uuid": "459ac072-ddca-4230-8011-de5e0c876b3a", 00:22:40.657 "strip_size_kb": 0, 00:22:40.657 "state": "online", 00:22:40.657 "raid_level": "raid1", 00:22:40.657 "superblock": false, 00:22:40.657 "num_base_bdevs": 4, 00:22:40.657 "num_base_bdevs_discovered": 3, 00:22:40.657 "num_base_bdevs_operational": 3, 00:22:40.657 "base_bdevs_list": [ 00:22:40.657 { 00:22:40.657 "name": null, 00:22:40.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.657 "is_configured": false, 00:22:40.657 "data_offset": 0, 00:22:40.657 "data_size": 65536 00:22:40.657 }, 00:22:40.657 { 00:22:40.657 "name": "BaseBdev2", 00:22:40.657 "uuid": "189a9329-b307-4b90-a582-e79642dbfc35", 00:22:40.657 "is_configured": true, 00:22:40.657 "data_offset": 0, 00:22:40.657 "data_size": 65536 00:22:40.657 }, 00:22:40.657 { 
00:22:40.657 "name": "BaseBdev3", 00:22:40.657 "uuid": "d3287565-60a5-4db3-a140-c0d390ebe85c", 00:22:40.657 "is_configured": true, 00:22:40.657 "data_offset": 0, 00:22:40.657 "data_size": 65536 00:22:40.657 }, 00:22:40.657 { 00:22:40.657 "name": "BaseBdev4", 00:22:40.657 "uuid": "50973b12-715c-4f35-b53e-9ba2ce7c3b93", 00:22:40.657 "is_configured": true, 00:22:40.657 "data_offset": 0, 00:22:40.657 "data_size": 65536 00:22:40.657 } 00:22:40.657 ] 00:22:40.657 }' 00:22:40.657 17:01:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:40.657 17:01:29 -- common/autotest_common.sh@10 -- # set +x 00:22:41.231 17:01:30 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:41.231 17:01:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:41.231 17:01:30 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:41.231 17:01:30 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:41.231 17:01:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:41.231 17:01:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.231 17:01:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.489 17:01:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:41.489 "name": "raid_bdev1", 00:22:41.489 "uuid": "459ac072-ddca-4230-8011-de5e0c876b3a", 00:22:41.489 "strip_size_kb": 0, 00:22:41.489 "state": "online", 00:22:41.489 "raid_level": "raid1", 00:22:41.489 "superblock": false, 00:22:41.489 "num_base_bdevs": 4, 00:22:41.489 "num_base_bdevs_discovered": 3, 00:22:41.489 "num_base_bdevs_operational": 3, 00:22:41.489 "base_bdevs_list": [ 00:22:41.489 { 00:22:41.489 "name": null, 00:22:41.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.489 "is_configured": false, 00:22:41.489 "data_offset": 0, 00:22:41.489 "data_size": 65536 00:22:41.489 }, 00:22:41.489 { 00:22:41.489 "name": "BaseBdev2", 00:22:41.489 "uuid": "189a9329-b307-4b90-a582-e79642dbfc35", 00:22:41.489 "is_configured": true, 00:22:41.489 "data_offset": 0, 00:22:41.489 "data_size": 65536 00:22:41.489 }, 00:22:41.489 { 00:22:41.489 "name": "BaseBdev3", 00:22:41.489 "uuid": "d3287565-60a5-4db3-a140-c0d390ebe85c", 00:22:41.489 "is_configured": true, 00:22:41.489 "data_offset": 0, 00:22:41.489 "data_size": 65536 00:22:41.489 }, 00:22:41.489 { 00:22:41.489 "name": "BaseBdev4", 00:22:41.489 "uuid": "50973b12-715c-4f35-b53e-9ba2ce7c3b93", 00:22:41.489 "is_configured": true, 00:22:41.489 "data_offset": 0, 00:22:41.489 "data_size": 65536 00:22:41.489 } 00:22:41.489 ] 00:22:41.489 }' 00:22:41.489 17:01:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:41.489 17:01:30 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:41.489 17:01:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:41.489 17:01:30 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:41.489 17:01:30 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:41.747 [2024-11-05 17:01:30.597514] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:41.747 [2024-11-05 17:01:30.597872] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:41.747 17:01:30 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:41.747 [2024-11-05 17:01:30.633249] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:41.747 
[2024-11-05 17:01:30.635289] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:42.005 [2024-11-05 17:01:30.762056] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:42.262 [2024-11-05 17:01:30.993925] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:42.263 [2024-11-05 17:01:30.995009] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:42.829 [2024-11-05 17:01:31.442612] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:42.829 [2024-11-05 17:01:31.443633] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:42.829 17:01:31 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:42.829 17:01:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:42.829 17:01:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:42.829 17:01:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:42.829 17:01:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:42.829 17:01:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.829 17:01:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.087 17:01:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:43.087 "name": "raid_bdev1", 00:22:43.087 "uuid": "459ac072-ddca-4230-8011-de5e0c876b3a", 00:22:43.087 "strip_size_kb": 0, 00:22:43.087 "state": "online", 00:22:43.087 "raid_level": "raid1", 00:22:43.087 "superblock": false, 00:22:43.087 "num_base_bdevs": 4, 00:22:43.087 "num_base_bdevs_discovered": 4, 00:22:43.087 "num_base_bdevs_operational": 4, 00:22:43.087 "process": { 00:22:43.087 "type": "rebuild", 00:22:43.087 "target": "spare", 00:22:43.087 "progress": { 00:22:43.087 "blocks": 14336, 00:22:43.087 "percent": 21 00:22:43.087 } 00:22:43.087 }, 00:22:43.087 "base_bdevs_list": [ 00:22:43.087 { 00:22:43.087 "name": "spare", 00:22:43.087 "uuid": "080d6d6d-c468-5689-a6f6-5ebcc4e6fb90", 00:22:43.087 "is_configured": true, 00:22:43.087 "data_offset": 0, 00:22:43.087 "data_size": 65536 00:22:43.087 }, 00:22:43.087 { 00:22:43.087 "name": "BaseBdev2", 00:22:43.087 "uuid": "189a9329-b307-4b90-a582-e79642dbfc35", 00:22:43.087 "is_configured": true, 00:22:43.087 "data_offset": 0, 00:22:43.087 "data_size": 65536 00:22:43.087 }, 00:22:43.087 { 00:22:43.087 "name": "BaseBdev3", 00:22:43.087 "uuid": "d3287565-60a5-4db3-a140-c0d390ebe85c", 00:22:43.087 "is_configured": true, 00:22:43.087 "data_offset": 0, 00:22:43.087 "data_size": 65536 00:22:43.087 }, 00:22:43.087 { 00:22:43.087 "name": "BaseBdev4", 00:22:43.087 "uuid": "50973b12-715c-4f35-b53e-9ba2ce7c3b93", 00:22:43.087 "is_configured": true, 00:22:43.087 "data_offset": 0, 00:22:43.087 "data_size": 65536 00:22:43.087 } 00:22:43.087 ] 00:22:43.087 }' 00:22:43.087 17:01:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:43.087 17:01:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:43.087 17:01:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:43.087 17:01:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:43.087 17:01:31 -- bdev/bdev_raid.sh@617 -- # 
'[' false = true ']' 00:22:43.087 17:01:31 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:43.087 17:01:31 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:43.087 17:01:31 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:43.087 17:01:31 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:43.346 [2024-11-05 17:01:32.215572] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:43.604 [2024-11-05 17:01:32.376325] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005a00 00:22:43.604 [2024-11-05 17:01:32.376580] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005c70 00:22:43.604 17:01:32 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:43.604 17:01:32 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:43.604 17:01:32 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:43.604 17:01:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:43.604 17:01:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:43.604 17:01:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:43.604 17:01:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:43.604 17:01:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.604 17:01:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.863 17:01:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:43.863 "name": "raid_bdev1", 00:22:43.863 "uuid": "459ac072-ddca-4230-8011-de5e0c876b3a", 00:22:43.863 "strip_size_kb": 0, 00:22:43.863 "state": "online", 00:22:43.863 "raid_level": "raid1", 00:22:43.863 "superblock": false, 00:22:43.863 "num_base_bdevs": 4, 00:22:43.863 "num_base_bdevs_discovered": 3, 00:22:43.863 "num_base_bdevs_operational": 3, 00:22:43.863 "process": { 00:22:43.863 "type": "rebuild", 00:22:43.863 "target": "spare", 00:22:43.863 "progress": { 00:22:43.863 "blocks": 24576, 00:22:43.863 "percent": 37 00:22:43.863 } 00:22:43.863 }, 00:22:43.863 "base_bdevs_list": [ 00:22:43.863 { 00:22:43.863 "name": "spare", 00:22:43.863 "uuid": "080d6d6d-c468-5689-a6f6-5ebcc4e6fb90", 00:22:43.863 "is_configured": true, 00:22:43.863 "data_offset": 0, 00:22:43.863 "data_size": 65536 00:22:43.863 }, 00:22:43.863 { 00:22:43.863 "name": null, 00:22:43.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.863 "is_configured": false, 00:22:43.863 "data_offset": 0, 00:22:43.863 "data_size": 65536 00:22:43.863 }, 00:22:43.863 { 00:22:43.863 "name": "BaseBdev3", 00:22:43.863 "uuid": "d3287565-60a5-4db3-a140-c0d390ebe85c", 00:22:43.863 "is_configured": true, 00:22:43.863 "data_offset": 0, 00:22:43.863 "data_size": 65536 00:22:43.863 }, 00:22:43.863 { 00:22:43.863 "name": "BaseBdev4", 00:22:43.863 "uuid": "50973b12-715c-4f35-b53e-9ba2ce7c3b93", 00:22:43.863 "is_configured": true, 00:22:43.863 "data_offset": 0, 00:22:43.863 "data_size": 65536 00:22:43.863 } 00:22:43.863 ] 00:22:43.863 }' 00:22:43.863 17:01:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:43.863 17:01:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:43.863 17:01:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:43.863 17:01:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:43.863 17:01:32 -- 
bdev/bdev_raid.sh@657 -- # local timeout=532 00:22:43.863 17:01:32 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:43.863 17:01:32 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:43.863 17:01:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:43.863 17:01:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:43.863 17:01:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:43.863 17:01:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:43.863 17:01:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.863 17:01:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.863 [2024-11-05 17:01:32.740909] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:44.121 17:01:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:44.121 "name": "raid_bdev1", 00:22:44.121 "uuid": "459ac072-ddca-4230-8011-de5e0c876b3a", 00:22:44.121 "strip_size_kb": 0, 00:22:44.121 "state": "online", 00:22:44.121 "raid_level": "raid1", 00:22:44.121 "superblock": false, 00:22:44.121 "num_base_bdevs": 4, 00:22:44.121 "num_base_bdevs_discovered": 3, 00:22:44.121 "num_base_bdevs_operational": 3, 00:22:44.121 "process": { 00:22:44.121 "type": "rebuild", 00:22:44.121 "target": "spare", 00:22:44.121 "progress": { 00:22:44.121 "blocks": 30720, 00:22:44.121 "percent": 46 00:22:44.121 } 00:22:44.121 }, 00:22:44.121 "base_bdevs_list": [ 00:22:44.121 { 00:22:44.121 "name": "spare", 00:22:44.121 "uuid": "080d6d6d-c468-5689-a6f6-5ebcc4e6fb90", 00:22:44.121 "is_configured": true, 00:22:44.121 "data_offset": 0, 00:22:44.121 "data_size": 65536 00:22:44.121 }, 00:22:44.121 { 00:22:44.121 "name": null, 00:22:44.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.122 "is_configured": false, 00:22:44.122 "data_offset": 0, 00:22:44.122 "data_size": 65536 00:22:44.122 }, 00:22:44.122 { 00:22:44.122 "name": "BaseBdev3", 00:22:44.122 "uuid": "d3287565-60a5-4db3-a140-c0d390ebe85c", 00:22:44.122 "is_configured": true, 00:22:44.122 "data_offset": 0, 00:22:44.122 "data_size": 65536 00:22:44.122 }, 00:22:44.122 { 00:22:44.122 "name": "BaseBdev4", 00:22:44.122 "uuid": "50973b12-715c-4f35-b53e-9ba2ce7c3b93", 00:22:44.122 "is_configured": true, 00:22:44.122 "data_offset": 0, 00:22:44.122 "data_size": 65536 00:22:44.122 } 00:22:44.122 ] 00:22:44.122 }' 00:22:44.122 17:01:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:44.122 [2024-11-05 17:01:32.986838] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:44.122 17:01:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:44.122 17:01:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:44.380 17:01:33 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:44.380 17:01:33 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:44.380 [2024-11-05 17:01:33.111180] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:22:44.946 [2024-11-05 17:01:33.573821] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:22:44.946 [2024-11-05 17:01:33.798040] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 
offset_end: 49152 00:22:45.204 [2024-11-05 17:01:33.913576] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:45.204 17:01:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:45.204 17:01:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:45.204 17:01:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:45.204 17:01:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:45.204 17:01:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:45.204 17:01:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:45.204 17:01:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.204 17:01:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.463 17:01:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:45.463 "name": "raid_bdev1", 00:22:45.463 "uuid": "459ac072-ddca-4230-8011-de5e0c876b3a", 00:22:45.463 "strip_size_kb": 0, 00:22:45.463 "state": "online", 00:22:45.463 "raid_level": "raid1", 00:22:45.463 "superblock": false, 00:22:45.463 "num_base_bdevs": 4, 00:22:45.463 "num_base_bdevs_discovered": 3, 00:22:45.463 "num_base_bdevs_operational": 3, 00:22:45.463 "process": { 00:22:45.463 "type": "rebuild", 00:22:45.463 "target": "spare", 00:22:45.463 "progress": { 00:22:45.463 "blocks": 51200, 00:22:45.463 "percent": 78 00:22:45.463 } 00:22:45.463 }, 00:22:45.463 "base_bdevs_list": [ 00:22:45.463 { 00:22:45.463 "name": "spare", 00:22:45.463 "uuid": "080d6d6d-c468-5689-a6f6-5ebcc4e6fb90", 00:22:45.463 "is_configured": true, 00:22:45.463 "data_offset": 0, 00:22:45.463 "data_size": 65536 00:22:45.463 }, 00:22:45.463 { 00:22:45.463 "name": null, 00:22:45.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.463 "is_configured": false, 00:22:45.463 "data_offset": 0, 00:22:45.463 "data_size": 65536 00:22:45.463 }, 00:22:45.463 { 00:22:45.463 "name": "BaseBdev3", 00:22:45.463 "uuid": "d3287565-60a5-4db3-a140-c0d390ebe85c", 00:22:45.463 "is_configured": true, 00:22:45.463 "data_offset": 0, 00:22:45.463 "data_size": 65536 00:22:45.463 }, 00:22:45.463 { 00:22:45.463 "name": "BaseBdev4", 00:22:45.463 "uuid": "50973b12-715c-4f35-b53e-9ba2ce7c3b93", 00:22:45.463 "is_configured": true, 00:22:45.463 "data_offset": 0, 00:22:45.463 "data_size": 65536 00:22:45.463 } 00:22:45.463 ] 00:22:45.463 }' 00:22:45.463 17:01:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:45.463 17:01:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:45.463 17:01:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:45.730 17:01:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:45.730 17:01:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:46.331 [2024-11-05 17:01:35.014765] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:46.331 [2024-11-05 17:01:35.120889] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:46.331 [2024-11-05 17:01:35.124912] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:46.589 17:01:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:46.589 17:01:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:46.589 17:01:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:46.589 17:01:35 
-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:46.589 17:01:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:46.589 17:01:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:46.589 17:01:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.589 17:01:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.848 17:01:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:46.848 "name": "raid_bdev1", 00:22:46.848 "uuid": "459ac072-ddca-4230-8011-de5e0c876b3a", 00:22:46.848 "strip_size_kb": 0, 00:22:46.848 "state": "online", 00:22:46.848 "raid_level": "raid1", 00:22:46.848 "superblock": false, 00:22:46.848 "num_base_bdevs": 4, 00:22:46.848 "num_base_bdevs_discovered": 3, 00:22:46.848 "num_base_bdevs_operational": 3, 00:22:46.848 "base_bdevs_list": [ 00:22:46.848 { 00:22:46.848 "name": "spare", 00:22:46.848 "uuid": "080d6d6d-c468-5689-a6f6-5ebcc4e6fb90", 00:22:46.848 "is_configured": true, 00:22:46.848 "data_offset": 0, 00:22:46.848 "data_size": 65536 00:22:46.848 }, 00:22:46.848 { 00:22:46.848 "name": null, 00:22:46.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.848 "is_configured": false, 00:22:46.848 "data_offset": 0, 00:22:46.848 "data_size": 65536 00:22:46.848 }, 00:22:46.848 { 00:22:46.848 "name": "BaseBdev3", 00:22:46.848 "uuid": "d3287565-60a5-4db3-a140-c0d390ebe85c", 00:22:46.848 "is_configured": true, 00:22:46.848 "data_offset": 0, 00:22:46.848 "data_size": 65536 00:22:46.848 }, 00:22:46.848 { 00:22:46.848 "name": "BaseBdev4", 00:22:46.848 "uuid": "50973b12-715c-4f35-b53e-9ba2ce7c3b93", 00:22:46.848 "is_configured": true, 00:22:46.848 "data_offset": 0, 00:22:46.848 "data_size": 65536 00:22:46.848 } 00:22:46.848 ] 00:22:46.848 }' 00:22:46.848 17:01:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:46.848 17:01:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:46.848 17:01:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:46.848 17:01:35 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:46.848 17:01:35 -- bdev/bdev_raid.sh@660 -- # break 00:22:46.848 17:01:35 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:46.848 17:01:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:46.848 17:01:35 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:46.848 17:01:35 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:46.848 17:01:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:46.848 17:01:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.848 17:01:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.106 17:01:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:47.106 "name": "raid_bdev1", 00:22:47.106 "uuid": "459ac072-ddca-4230-8011-de5e0c876b3a", 00:22:47.106 "strip_size_kb": 0, 00:22:47.106 "state": "online", 00:22:47.106 "raid_level": "raid1", 00:22:47.106 "superblock": false, 00:22:47.106 "num_base_bdevs": 4, 00:22:47.106 "num_base_bdevs_discovered": 3, 00:22:47.106 "num_base_bdevs_operational": 3, 00:22:47.106 "base_bdevs_list": [ 00:22:47.106 { 00:22:47.106 "name": "spare", 00:22:47.106 "uuid": "080d6d6d-c468-5689-a6f6-5ebcc4e6fb90", 00:22:47.106 "is_configured": true, 00:22:47.106 "data_offset": 0, 00:22:47.106 "data_size": 65536 00:22:47.106 }, 00:22:47.106 { 
00:22:47.107 "name": null, 00:22:47.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.107 "is_configured": false, 00:22:47.107 "data_offset": 0, 00:22:47.107 "data_size": 65536 00:22:47.107 }, 00:22:47.107 { 00:22:47.107 "name": "BaseBdev3", 00:22:47.107 "uuid": "d3287565-60a5-4db3-a140-c0d390ebe85c", 00:22:47.107 "is_configured": true, 00:22:47.107 "data_offset": 0, 00:22:47.107 "data_size": 65536 00:22:47.107 }, 00:22:47.107 { 00:22:47.107 "name": "BaseBdev4", 00:22:47.107 "uuid": "50973b12-715c-4f35-b53e-9ba2ce7c3b93", 00:22:47.107 "is_configured": true, 00:22:47.107 "data_offset": 0, 00:22:47.107 "data_size": 65536 00:22:47.107 } 00:22:47.107 ] 00:22:47.107 }' 00:22:47.107 17:01:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:47.107 17:01:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:47.107 17:01:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:47.365 17:01:36 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:47.365 17:01:36 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:47.365 17:01:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:47.365 17:01:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:47.365 17:01:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:47.365 17:01:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:47.365 17:01:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:47.365 17:01:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:47.365 17:01:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:47.365 17:01:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:47.365 17:01:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:47.365 17:01:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.365 17:01:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.623 17:01:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:47.623 "name": "raid_bdev1", 00:22:47.623 "uuid": "459ac072-ddca-4230-8011-de5e0c876b3a", 00:22:47.623 "strip_size_kb": 0, 00:22:47.623 "state": "online", 00:22:47.623 "raid_level": "raid1", 00:22:47.623 "superblock": false, 00:22:47.623 "num_base_bdevs": 4, 00:22:47.623 "num_base_bdevs_discovered": 3, 00:22:47.623 "num_base_bdevs_operational": 3, 00:22:47.623 "base_bdevs_list": [ 00:22:47.623 { 00:22:47.623 "name": "spare", 00:22:47.623 "uuid": "080d6d6d-c468-5689-a6f6-5ebcc4e6fb90", 00:22:47.623 "is_configured": true, 00:22:47.623 "data_offset": 0, 00:22:47.623 "data_size": 65536 00:22:47.623 }, 00:22:47.623 { 00:22:47.623 "name": null, 00:22:47.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.623 "is_configured": false, 00:22:47.623 "data_offset": 0, 00:22:47.623 "data_size": 65536 00:22:47.623 }, 00:22:47.623 { 00:22:47.623 "name": "BaseBdev3", 00:22:47.623 "uuid": "d3287565-60a5-4db3-a140-c0d390ebe85c", 00:22:47.623 "is_configured": true, 00:22:47.623 "data_offset": 0, 00:22:47.623 "data_size": 65536 00:22:47.623 }, 00:22:47.623 { 00:22:47.623 "name": "BaseBdev4", 00:22:47.623 "uuid": "50973b12-715c-4f35-b53e-9ba2ce7c3b93", 00:22:47.623 "is_configured": true, 00:22:47.623 "data_offset": 0, 00:22:47.623 "data_size": 65536 00:22:47.623 } 00:22:47.623 ] 00:22:47.623 }' 00:22:47.623 17:01:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:47.623 17:01:36 -- common/autotest_common.sh@10 -- # set +x 
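Every verify_raid_bdev_process / verify_raid_bdev_state pass traced above follows one pattern: dump the raid bdev's JSON over the RPC socket, filter it with jq, and compare fields. A minimal standalone sketch of that polling loop, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock (paths as in the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    timeout=60 SECONDS=0   # SECONDS is bash's built-in elapsed-time counter
    while (( SECONDS < timeout )); do
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
               jq -r '.[] | select(.name == "raid_bdev1")')
        # The "process" object drops out of the JSON once the rebuild finishes,
        # so both fields falling back to "none" is the completion signal the
        # test checks above before it breaks out of its own loop.
        [[ $(jq -r '.process.type   // "none"' <<<"$info") == none ]] &&
        [[ $(jq -r '.process.target // "none"' <<<"$info") == none ]] && break
        sleep 1
    done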
00:22:48.188 17:01:36 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:48.447 [2024-11-05 17:01:37.216648] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:48.447 [2024-11-05 17:01:37.216954] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:48.447 00:22:48.447 Latency(us) 00:22:48.447 [2024-11-05T17:01:37.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.447 [2024-11-05T17:01:37.324Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:48.447 raid_bdev1 : 11.05 103.57 310.70 0.00 0.00 13397.19 288.58 124875.87 00:22:48.447 [2024-11-05T17:01:37.324Z] =================================================================================================================== 00:22:48.447 [2024-11-05T17:01:37.324Z] Total : 103.57 310.70 0.00 0.00 13397.19 288.58 124875.87 00:22:48.447 [2024-11-05 17:01:37.299497] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:48.447 [2024-11-05 17:01:37.299659] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:48.447 [2024-11-05 17:01:37.299778] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:48.447 0 00:22:48.447 [2024-11-05 17:01:37.299995] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:22:48.447 17:01:37 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.447 17:01:37 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:48.706 17:01:37 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:48.706 17:01:37 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:48.706 17:01:37 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:48.706 17:01:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:48.706 17:01:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:48.706 17:01:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:48.706 17:01:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:48.706 17:01:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:48.706 17:01:37 -- bdev/nbd_common.sh@12 -- # local i 00:22:48.706 17:01:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:48.706 17:01:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:48.706 17:01:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:48.965 /dev/nbd0 00:22:48.965 17:01:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:48.965 17:01:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:48.965 17:01:37 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:48.965 17:01:37 -- common/autotest_common.sh@867 -- # local i 00:22:48.965 17:01:37 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:48.965 17:01:37 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:48.965 17:01:37 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:48.965 17:01:37 -- common/autotest_common.sh@871 -- # break 00:22:48.965 17:01:37 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:48.965 17:01:37 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:48.965 17:01:37 -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:48.965 1+0 records in 00:22:48.965 1+0 records out 00:22:48.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472615 s, 8.7 MB/s 00:22:48.965 17:01:37 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:48.965 17:01:37 -- common/autotest_common.sh@884 -- # size=4096 00:22:48.965 17:01:37 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:48.965 17:01:37 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:48.965 17:01:37 -- common/autotest_common.sh@887 -- # return 0 00:22:48.965 17:01:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:48.965 17:01:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:48.965 17:01:37 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:48.965 17:01:37 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:22:48.965 17:01:37 -- bdev/bdev_raid.sh@678 -- # continue 00:22:48.965 17:01:37 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:48.965 17:01:37 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:22:48.965 17:01:37 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:22:48.965 17:01:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:48.965 17:01:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:22:48.965 17:01:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:48.965 17:01:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:48.965 17:01:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:48.965 17:01:37 -- bdev/nbd_common.sh@12 -- # local i 00:22:48.965 17:01:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:48.965 17:01:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:48.965 17:01:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:49.224 /dev/nbd1 00:22:49.224 17:01:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:49.224 17:01:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:49.224 17:01:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:49.224 17:01:38 -- common/autotest_common.sh@867 -- # local i 00:22:49.224 17:01:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:49.224 17:01:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:49.224 17:01:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:49.224 17:01:38 -- common/autotest_common.sh@871 -- # break 00:22:49.224 17:01:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:49.224 17:01:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:49.224 17:01:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:49.224 1+0 records in 00:22:49.224 1+0 records out 00:22:49.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049808 s, 8.2 MB/s 00:22:49.224 17:01:38 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:49.224 17:01:38 -- common/autotest_common.sh@884 -- # size=4096 00:22:49.224 17:01:38 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:49.224 17:01:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:49.224 17:01:38 -- common/autotest_common.sh@887 -- # return 0 00:22:49.224 
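For the data check the test exports the rebuilt spare and each surviving base bdev as kernel NBD devices and byte-compares them; the waitfornbd helper traced above only confirms the node appears in /proc/partitions and answers one 4 KiB O_DIRECT read. A rough, simplified sketch of the same sequence (the retry back-off and scratch path are assumptions):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    waitfornbd() {                      # condensed form of the helper above
        local name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions && break
            sleep 0.1                   # assumed pause between retries
        done
        dd if="/dev/$name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ]   # the read must produce data
    }
    "$rpc" -s "$sock" nbd_start_disk spare /dev/nbd0     && waitfornbd nbd0
    "$rpc" -s "$sock" nbd_start_disk BaseBdev3 /dev/nbd1 && waitfornbd nbd1
    cmp -i 0 /dev/nbd0 /dev/nbd1   # raid1: spare must match the surviving copy
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1

The -i 0 skips nothing because data_offset is 0 in this no-superblock run; the superblock variant further down would have to skip the metadata region instead.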
17:01:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:49.224 17:01:38 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:49.224 17:01:38 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:49.482 17:01:38 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:49.482 17:01:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:49.482 17:01:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:49.482 17:01:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:49.482 17:01:38 -- bdev/nbd_common.sh@51 -- # local i 00:22:49.482 17:01:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:49.482 17:01:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:49.741 17:01:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:49.741 17:01:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:49.741 17:01:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:49.741 17:01:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:49.741 17:01:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:49.741 17:01:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:49.741 17:01:38 -- bdev/nbd_common.sh@41 -- # break 00:22:49.741 17:01:38 -- bdev/nbd_common.sh@45 -- # return 0 00:22:49.741 17:01:38 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:49.741 17:01:38 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:22:49.741 17:01:38 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:22:49.741 17:01:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:49.741 17:01:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:22:49.741 17:01:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:49.741 17:01:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:49.741 17:01:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:49.741 17:01:38 -- bdev/nbd_common.sh@12 -- # local i 00:22:49.741 17:01:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:49.741 17:01:38 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:49.741 17:01:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:50.000 /dev/nbd1 00:22:50.000 17:01:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:50.000 17:01:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:50.000 17:01:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:50.000 17:01:38 -- common/autotest_common.sh@867 -- # local i 00:22:50.000 17:01:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:50.000 17:01:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:50.000 17:01:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:50.000 17:01:38 -- common/autotest_common.sh@871 -- # break 00:22:50.000 17:01:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:50.000 17:01:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:50.000 17:01:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:50.000 1+0 records in 00:22:50.000 1+0 records out 00:22:50.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511068 s, 8.0 MB/s 00:22:50.000 17:01:38 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:50.000 17:01:38 -- 
common/autotest_common.sh@884 -- # size=4096 00:22:50.000 17:01:38 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:50.000 17:01:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:50.000 17:01:38 -- common/autotest_common.sh@887 -- # return 0 00:22:50.000 17:01:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:50.000 17:01:38 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:50.000 17:01:38 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:50.000 17:01:38 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:50.000 17:01:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:50.000 17:01:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:50.000 17:01:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:50.000 17:01:38 -- bdev/nbd_common.sh@51 -- # local i 00:22:50.000 17:01:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:50.000 17:01:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:50.259 17:01:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:50.259 17:01:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:50.259 17:01:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:50.259 17:01:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:50.259 17:01:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:50.259 17:01:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:50.259 17:01:39 -- bdev/nbd_common.sh@41 -- # break 00:22:50.259 17:01:39 -- bdev/nbd_common.sh@45 -- # return 0 00:22:50.259 17:01:39 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:50.259 17:01:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:50.259 17:01:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:50.259 17:01:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:50.259 17:01:39 -- bdev/nbd_common.sh@51 -- # local i 00:22:50.259 17:01:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:50.259 17:01:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:50.826 17:01:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:50.826 17:01:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:50.826 17:01:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:50.826 17:01:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:50.826 17:01:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:50.826 17:01:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:50.826 17:01:39 -- bdev/nbd_common.sh@41 -- # break 00:22:50.826 17:01:39 -- bdev/nbd_common.sh@45 -- # return 0 00:22:50.826 17:01:39 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:50.826 17:01:39 -- bdev/bdev_raid.sh@709 -- # killprocess 125859 00:22:50.826 17:01:39 -- common/autotest_common.sh@936 -- # '[' -z 125859 ']' 00:22:50.826 17:01:39 -- common/autotest_common.sh@940 -- # kill -0 125859 00:22:50.827 17:01:39 -- common/autotest_common.sh@941 -- # uname 00:22:50.827 17:01:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:50.827 17:01:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125859 00:22:50.827 17:01:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:50.827 17:01:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = 
sudo ']' 00:22:50.827 17:01:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 125859' 00:22:50.827 killing process with pid 125859 00:22:50.827 17:01:39 -- common/autotest_common.sh@955 -- # kill 125859 00:22:50.827 Received shutdown signal, test time was about 13.209467 seconds 00:22:50.827 00:22:50.827 Latency(us) 00:22:50.827 [2024-11-05T17:01:39.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.827 [2024-11-05T17:01:39.704Z] =================================================================================================================== 00:22:50.827 [2024-11-05T17:01:39.704Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:50.827 17:01:39 -- common/autotest_common.sh@960 -- # wait 125859 00:22:50.827 [2024-11-05 17:01:39.448176] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:51.085 [2024-11-05 17:01:39.728051] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:52.021 00:22:52.021 real 0m18.761s 00:22:52.021 user 0m29.262s 00:22:52.021 sys 0m2.140s 00:22:52.021 17:01:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:52.021 ************************************ 00:22:52.021 END TEST raid_rebuild_test_io 00:22:52.021 ************************************ 00:22:52.021 17:01:40 -- common/autotest_common.sh@10 -- # set +x 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:22:52.021 17:01:40 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:22:52.021 17:01:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:52.021 17:01:40 -- common/autotest_common.sh@10 -- # set +x 00:22:52.021 ************************************ 00:22:52.021 START TEST raid_rebuild_test_sb_io 00:22:52.021 ************************************ 00:22:52.021 17:01:40 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 true true 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:52.021 
17:01:40 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@544 -- # raid_pid=126371 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@545 -- # waitforlisten 126371 /var/tmp/spdk-raid.sock 00:22:52.021 17:01:40 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:52.022 17:01:40 -- common/autotest_common.sh@829 -- # '[' -z 126371 ']' 00:22:52.022 17:01:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:52.022 17:01:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.022 17:01:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:52.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:52.022 17:01:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:52.022 17:01:40 -- common/autotest_common.sh@10 -- # set +x 00:22:52.022 [2024-11-05 17:01:40.838058] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:52.022 [2024-11-05 17:01:40.838488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126371 ] 00:22:52.022 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:52.022 Zero copy mechanism will not be used. 
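The bdevperf invocation above supplies the background I/O for the _sb_io variant, and its 3 MiB I/O size is exactly why the notice just above fires (3145728 bytes exceeds the 65536-byte zero-copy threshold). The same command, annotated; flag meanings follow common bdevperf usage, and -U is reproduced from the log without annotation:

    # -r: RPC socket shared with the test's rpc.py calls
    # -T raid_bdev1: exercise only the raid bdev under test; -t 60: seconds to run
    # -w randrw -M 50: random mixed workload, 50 percent reads
    # -o 3M -q 2: 3 MiB I/Os at queue depth 2
    # -z: start idle until a perform_tests RPC arrives (sent later via
    #     bdevperf.py); -L bdev_raid: enable that debug log flag
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 3M -q 2 -U -z -L bdev_raid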
00:22:52.280 [2024-11-05 17:01:41.008377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.538 [2024-11-05 17:01:41.184965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.538 [2024-11-05 17:01:41.349044] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:53.105 17:01:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:53.105 17:01:41 -- common/autotest_common.sh@862 -- # return 0 00:22:53.105 17:01:41 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:53.105 17:01:41 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:53.105 17:01:41 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:53.364 BaseBdev1_malloc 00:22:53.364 17:01:42 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:53.364 [2024-11-05 17:01:42.255934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:53.364 [2024-11-05 17:01:42.256188] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.364 [2024-11-05 17:01:42.256260] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:22:53.364 [2024-11-05 17:01:42.256542] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.364 [2024-11-05 17:01:42.258843] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.364 [2024-11-05 17:01:42.259069] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:53.622 BaseBdev1 00:22:53.622 17:01:42 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:53.622 17:01:42 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:53.622 17:01:42 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:53.881 BaseBdev2_malloc 00:22:53.881 17:01:42 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:53.881 [2024-11-05 17:01:42.729802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:53.881 [2024-11-05 17:01:42.730016] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.881 [2024-11-05 17:01:42.730097] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:53.881 [2024-11-05 17:01:42.730315] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.881 [2024-11-05 17:01:42.732619] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.881 [2024-11-05 17:01:42.732801] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:53.881 BaseBdev2 00:22:53.881 17:01:42 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:53.881 17:01:42 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:53.881 17:01:42 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:54.139 BaseBdev3_malloc 00:22:54.139 17:01:42 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:22:54.398 [2024-11-05 17:01:43.138894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:54.398 [2024-11-05 17:01:43.139094] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:54.398 [2024-11-05 17:01:43.139169] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:22:54.398 [2024-11-05 17:01:43.139440] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:54.398 [2024-11-05 17:01:43.141619] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:54.398 [2024-11-05 17:01:43.141798] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:54.398 BaseBdev3 00:22:54.398 17:01:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:54.398 17:01:43 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:54.398 17:01:43 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:54.657 BaseBdev4_malloc 00:22:54.657 17:01:43 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:54.915 [2024-11-05 17:01:43.611907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:54.915 [2024-11-05 17:01:43.612101] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:54.915 [2024-11-05 17:01:43.612170] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:22:54.915 [2024-11-05 17:01:43.612448] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:54.915 [2024-11-05 17:01:43.614686] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:54.915 [2024-11-05 17:01:43.614860] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:54.915 BaseBdev4 00:22:54.915 17:01:43 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:55.174 spare_malloc 00:22:55.174 17:01:43 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:55.174 spare_delay 00:22:55.174 17:01:44 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:55.434 [2024-11-05 17:01:44.201901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:55.434 [2024-11-05 17:01:44.202145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.434 [2024-11-05 17:01:44.202218] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:55.434 [2024-11-05 17:01:44.202499] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.434 [2024-11-05 17:01:44.204965] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.434 [2024-11-05 17:01:44.205151] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:55.434 spare 00:22:55.434 17:01:44 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:55.693 [2024-11-05 17:01:44.450023] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:55.693 [2024-11-05 17:01:44.452104] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:55.693 [2024-11-05 17:01:44.452314] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:55.693 [2024-11-05 17:01:44.452414] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:55.693 [2024-11-05 17:01:44.452668] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:22:55.693 [2024-11-05 17:01:44.452779] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:55.693 [2024-11-05 17:01:44.452930] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:55.693 [2024-11-05 17:01:44.453419] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:22:55.693 [2024-11-05 17:01:44.453549] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:22:55.693 [2024-11-05 17:01:44.453773] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:55.693 17:01:44 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:55.693 17:01:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:55.693 17:01:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:55.693 17:01:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:55.693 17:01:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:55.693 17:01:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:55.693 17:01:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:55.693 17:01:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:55.693 17:01:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:55.693 17:01:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:55.693 17:01:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.693 17:01:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.951 17:01:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:55.952 "name": "raid_bdev1", 00:22:55.952 "uuid": "78318fa5-79a8-4dd3-a2ea-1e80f3ba4b0e", 00:22:55.952 "strip_size_kb": 0, 00:22:55.952 "state": "online", 00:22:55.952 "raid_level": "raid1", 00:22:55.952 "superblock": true, 00:22:55.952 "num_base_bdevs": 4, 00:22:55.952 "num_base_bdevs_discovered": 4, 00:22:55.952 "num_base_bdevs_operational": 4, 00:22:55.952 "base_bdevs_list": [ 00:22:55.952 { 00:22:55.952 "name": "BaseBdev1", 00:22:55.952 "uuid": "36af87e1-90da-5d34-a551-e7add13e81db", 00:22:55.952 "is_configured": true, 00:22:55.952 "data_offset": 2048, 00:22:55.952 "data_size": 63488 00:22:55.952 }, 00:22:55.952 { 00:22:55.952 "name": "BaseBdev2", 00:22:55.952 "uuid": "6af47b06-7057-5464-8d43-9cfac17ad701", 00:22:55.952 "is_configured": true, 00:22:55.952 "data_offset": 2048, 00:22:55.952 "data_size": 63488 00:22:55.952 }, 00:22:55.952 { 00:22:55.952 "name": "BaseBdev3", 00:22:55.952 "uuid": "51b08255-25f5-5a53-a4ca-050ceaeeb65b", 00:22:55.952 "is_configured": true, 00:22:55.952 "data_offset": 2048, 00:22:55.952 "data_size": 63488 00:22:55.952 }, 00:22:55.952 
{ 00:22:55.952 "name": "BaseBdev4", 00:22:55.952 "uuid": "a601c332-baa8-522d-8b2a-7a391a3d5aff", 00:22:55.952 "is_configured": true, 00:22:55.952 "data_offset": 2048, 00:22:55.952 "data_size": 63488 00:22:55.952 } 00:22:55.952 ] 00:22:55.952 }' 00:22:55.952 17:01:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:55.952 17:01:44 -- common/autotest_common.sh@10 -- # set +x 00:22:56.519 17:01:45 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:56.519 17:01:45 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:56.519 [2024-11-05 17:01:45.382341] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:56.519 17:01:45 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:22:56.519 17:01:45 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:56.519 17:01:45 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.777 17:01:45 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:56.777 17:01:45 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:56.777 17:01:45 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:56.777 17:01:45 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:57.036 [2024-11-05 17:01:45.705044] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:57.036 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:57.036 Zero copy mechanism will not be used. 00:22:57.036 Running I/O for 60 seconds... 
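With bdevperf parked by -z, the core of the I/O test is to release the workload and hot-remove a base bdev while it runs, then confirm raid_bdev1 stays online while degrading from 4 to 3 members. Roughly, the driving steps (both commands appear in the trace above; the backgrounding is simplified here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    perf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    sock=/var/tmp/spdk-raid.sock
    "$perf_py" -s "$sock" perform_tests &                    # start the 60 s randrw load
    "$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev1   # hot-remove under I/O
    # expected: state stays "online" and the next bdev_raid_get_bdevs dump shows
    # num_base_bdevs_discovered == num_base_bdevs_operational == 3, with the
    # removed slot reported as a null entry.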
00:22:57.036 [2024-11-05 17:01:45.784953] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:57.036 [2024-11-05 17:01:45.791517] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:22:57.036 17:01:45 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:57.036 17:01:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:57.036 17:01:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:57.036 17:01:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:57.036 17:01:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:57.036 17:01:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:57.036 17:01:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:57.036 17:01:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:57.036 17:01:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:57.036 17:01:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:57.036 17:01:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.036 17:01:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.295 17:01:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:57.295 "name": "raid_bdev1", 00:22:57.295 "uuid": "78318fa5-79a8-4dd3-a2ea-1e80f3ba4b0e", 00:22:57.295 "strip_size_kb": 0, 00:22:57.295 "state": "online", 00:22:57.295 "raid_level": "raid1", 00:22:57.295 "superblock": true, 00:22:57.295 "num_base_bdevs": 4, 00:22:57.295 "num_base_bdevs_discovered": 3, 00:22:57.295 "num_base_bdevs_operational": 3, 00:22:57.295 "base_bdevs_list": [ 00:22:57.295 { 00:22:57.295 "name": null, 00:22:57.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.295 "is_configured": false, 00:22:57.295 "data_offset": 2048, 00:22:57.295 "data_size": 63488 00:22:57.295 }, 00:22:57.295 { 00:22:57.295 "name": "BaseBdev2", 00:22:57.295 "uuid": "6af47b06-7057-5464-8d43-9cfac17ad701", 00:22:57.295 "is_configured": true, 00:22:57.295 "data_offset": 2048, 00:22:57.295 "data_size": 63488 00:22:57.295 }, 00:22:57.295 { 00:22:57.295 "name": "BaseBdev3", 00:22:57.295 "uuid": "51b08255-25f5-5a53-a4ca-050ceaeeb65b", 00:22:57.295 "is_configured": true, 00:22:57.295 "data_offset": 2048, 00:22:57.295 "data_size": 63488 00:22:57.295 }, 00:22:57.295 { 00:22:57.295 "name": "BaseBdev4", 00:22:57.295 "uuid": "a601c332-baa8-522d-8b2a-7a391a3d5aff", 00:22:57.295 "is_configured": true, 00:22:57.295 "data_offset": 2048, 00:22:57.295 "data_size": 63488 00:22:57.295 } 00:22:57.295 ] 00:22:57.295 }' 00:22:57.295 17:01:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:57.295 17:01:46 -- common/autotest_common.sh@10 -- # set +x 00:22:57.863 17:01:46 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:58.122 [2024-11-05 17:01:46.851916] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:58.122 [2024-11-05 17:01:46.852269] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:58.122 17:01:46 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:58.122 [2024-11-05 17:01:46.902539] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:58.122 [2024-11-05 17:01:46.904762] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:58.122 
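Re-adding capacity is the mirror image: bdev_raid_add_base_bdev attaches the spare and the module starts the rebuild whose split/process_offset DEBUG lines fill the next stretch of output. A small sketch of that step plus a check that the rebuild actually began (names as in the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 spare
    until "$rpc" -s "$sock" bdev_raid_get_bdevs all |
          jq -e '.[] | select(.name == "raid_bdev1")
                     | .process.type == "rebuild" and .process.target == "spare"' \
          >/dev/null; do
        sleep 0.1   # poll until .process reports an active rebuild onto the spare
    done

One aside on the trace further down: the "[: =: unary operator expected" from bdev_raid.sh line 617 stems from the preceding '[' = false ']' step, i.e. a variable that expanded to nothing inside an unquoted [ test; the conventional guard for that is [ "${flag:-false}" = false ].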
[2024-11-05 17:01:47.006565] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:58.122 [2024-11-05 17:01:47.007178] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:58.381 [2024-11-05 17:01:47.210877] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:58.381 [2024-11-05 17:01:47.211758] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:58.949 [2024-11-05 17:01:47.545876] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:58.949 [2024-11-05 17:01:47.667948] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:58.949 [2024-11-05 17:01:47.668717] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:59.208 17:01:47 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:59.208 17:01:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:59.208 17:01:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:59.208 17:01:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:59.208 17:01:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:59.208 17:01:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.208 17:01:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.208 [2024-11-05 17:01:47.988118] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:59.208 [2024-11-05 17:01:48.098144] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:59.467 17:01:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:59.467 "name": "raid_bdev1", 00:22:59.467 "uuid": "78318fa5-79a8-4dd3-a2ea-1e80f3ba4b0e", 00:22:59.467 "strip_size_kb": 0, 00:22:59.467 "state": "online", 00:22:59.467 "raid_level": "raid1", 00:22:59.467 "superblock": true, 00:22:59.467 "num_base_bdevs": 4, 00:22:59.467 "num_base_bdevs_discovered": 4, 00:22:59.467 "num_base_bdevs_operational": 4, 00:22:59.467 "process": { 00:22:59.467 "type": "rebuild", 00:22:59.467 "target": "spare", 00:22:59.467 "progress": { 00:22:59.467 "blocks": 16384, 00:22:59.467 "percent": 25 00:22:59.467 } 00:22:59.467 }, 00:22:59.467 "base_bdevs_list": [ 00:22:59.467 { 00:22:59.467 "name": "spare", 00:22:59.467 "uuid": "79476540-9108-5582-8309-3210d5e26b04", 00:22:59.467 "is_configured": true, 00:22:59.467 "data_offset": 2048, 00:22:59.467 "data_size": 63488 00:22:59.467 }, 00:22:59.467 { 00:22:59.467 "name": "BaseBdev2", 00:22:59.467 "uuid": "6af47b06-7057-5464-8d43-9cfac17ad701", 00:22:59.467 "is_configured": true, 00:22:59.467 "data_offset": 2048, 00:22:59.467 "data_size": 63488 00:22:59.467 }, 00:22:59.467 { 00:22:59.467 "name": "BaseBdev3", 00:22:59.467 "uuid": "51b08255-25f5-5a53-a4ca-050ceaeeb65b", 00:22:59.467 "is_configured": true, 00:22:59.467 "data_offset": 2048, 00:22:59.467 "data_size": 63488 00:22:59.467 }, 00:22:59.467 { 00:22:59.467 "name": "BaseBdev4", 00:22:59.467 "uuid": "a601c332-baa8-522d-8b2a-7a391a3d5aff", 00:22:59.467 
"is_configured": true, 00:22:59.467 "data_offset": 2048, 00:22:59.467 "data_size": 63488 00:22:59.467 } 00:22:59.467 ] 00:22:59.467 }' 00:22:59.467 17:01:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:59.467 17:01:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:59.467 17:01:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:59.467 17:01:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:59.467 17:01:48 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:59.467 [2024-11-05 17:01:48.324596] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:59.725 [2024-11-05 17:01:48.486251] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:59.984 [2024-11-05 17:01:48.648212] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:59.984 [2024-11-05 17:01:48.658232] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.984 [2024-11-05 17:01:48.682288] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:22:59.984 17:01:48 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:59.984 17:01:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:59.984 17:01:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:59.984 17:01:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:59.984 17:01:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:59.984 17:01:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:59.984 17:01:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:59.984 17:01:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:59.984 17:01:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:59.984 17:01:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:59.984 17:01:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.984 17:01:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.242 17:01:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:00.242 "name": "raid_bdev1", 00:23:00.242 "uuid": "78318fa5-79a8-4dd3-a2ea-1e80f3ba4b0e", 00:23:00.242 "strip_size_kb": 0, 00:23:00.242 "state": "online", 00:23:00.242 "raid_level": "raid1", 00:23:00.242 "superblock": true, 00:23:00.242 "num_base_bdevs": 4, 00:23:00.242 "num_base_bdevs_discovered": 3, 00:23:00.242 "num_base_bdevs_operational": 3, 00:23:00.242 "base_bdevs_list": [ 00:23:00.242 { 00:23:00.242 "name": null, 00:23:00.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.242 "is_configured": false, 00:23:00.242 "data_offset": 2048, 00:23:00.242 "data_size": 63488 00:23:00.242 }, 00:23:00.242 { 00:23:00.242 "name": "BaseBdev2", 00:23:00.242 "uuid": "6af47b06-7057-5464-8d43-9cfac17ad701", 00:23:00.242 "is_configured": true, 00:23:00.242 "data_offset": 2048, 00:23:00.242 "data_size": 63488 00:23:00.243 }, 00:23:00.243 { 00:23:00.243 "name": "BaseBdev3", 00:23:00.243 "uuid": "51b08255-25f5-5a53-a4ca-050ceaeeb65b", 00:23:00.243 "is_configured": true, 00:23:00.243 "data_offset": 2048, 00:23:00.243 "data_size": 63488 00:23:00.243 }, 00:23:00.243 { 00:23:00.243 "name": "BaseBdev4", 00:23:00.243 "uuid": 
"a601c332-baa8-522d-8b2a-7a391a3d5aff", 00:23:00.243 "is_configured": true, 00:23:00.243 "data_offset": 2048, 00:23:00.243 "data_size": 63488 00:23:00.243 } 00:23:00.243 ] 00:23:00.243 }' 00:23:00.243 17:01:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:00.243 17:01:48 -- common/autotest_common.sh@10 -- # set +x 00:23:00.808 17:01:49 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:00.808 17:01:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:00.808 17:01:49 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:00.808 17:01:49 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:00.808 17:01:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:00.808 17:01:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.808 17:01:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.068 17:01:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:01.068 "name": "raid_bdev1", 00:23:01.068 "uuid": "78318fa5-79a8-4dd3-a2ea-1e80f3ba4b0e", 00:23:01.068 "strip_size_kb": 0, 00:23:01.068 "state": "online", 00:23:01.068 "raid_level": "raid1", 00:23:01.068 "superblock": true, 00:23:01.068 "num_base_bdevs": 4, 00:23:01.068 "num_base_bdevs_discovered": 3, 00:23:01.068 "num_base_bdevs_operational": 3, 00:23:01.068 "base_bdevs_list": [ 00:23:01.068 { 00:23:01.068 "name": null, 00:23:01.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.068 "is_configured": false, 00:23:01.068 "data_offset": 2048, 00:23:01.068 "data_size": 63488 00:23:01.068 }, 00:23:01.068 { 00:23:01.068 "name": "BaseBdev2", 00:23:01.068 "uuid": "6af47b06-7057-5464-8d43-9cfac17ad701", 00:23:01.068 "is_configured": true, 00:23:01.068 "data_offset": 2048, 00:23:01.068 "data_size": 63488 00:23:01.068 }, 00:23:01.068 { 00:23:01.068 "name": "BaseBdev3", 00:23:01.068 "uuid": "51b08255-25f5-5a53-a4ca-050ceaeeb65b", 00:23:01.068 "is_configured": true, 00:23:01.068 "data_offset": 2048, 00:23:01.068 "data_size": 63488 00:23:01.068 }, 00:23:01.068 { 00:23:01.068 "name": "BaseBdev4", 00:23:01.068 "uuid": "a601c332-baa8-522d-8b2a-7a391a3d5aff", 00:23:01.068 "is_configured": true, 00:23:01.068 "data_offset": 2048, 00:23:01.068 "data_size": 63488 00:23:01.068 } 00:23:01.068 ] 00:23:01.068 }' 00:23:01.068 17:01:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:01.068 17:01:49 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:01.068 17:01:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:01.326 17:01:49 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:01.326 17:01:49 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:01.584 [2024-11-05 17:01:50.227580] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:01.584 [2024-11-05 17:01:50.227936] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:01.584 17:01:50 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:01.584 [2024-11-05 17:01:50.276172] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:01.584 [2024-11-05 17:01:50.278318] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:01.584 [2024-11-05 17:01:50.400541] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:23:01.584 [2024-11-05 17:01:50.401305] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:01.842 [2024-11-05 17:01:50.539172] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:01.842 [2024-11-05 17:01:50.539847] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:02.100 [2024-11-05 17:01:50.965413] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:02.360 [2024-11-05 17:01:51.213567] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:02.629 17:01:51 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:02.630 17:01:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:02.630 17:01:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:02.630 17:01:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:02.630 17:01:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:02.630 17:01:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.630 17:01:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.630 [2024-11-05 17:01:51.444699] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:02.630 [2024-11-05 17:01:51.445085] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:02.630 17:01:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:02.630 "name": "raid_bdev1", 00:23:02.630 "uuid": "78318fa5-79a8-4dd3-a2ea-1e80f3ba4b0e", 00:23:02.630 "strip_size_kb": 0, 00:23:02.630 "state": "online", 00:23:02.630 "raid_level": "raid1", 00:23:02.630 "superblock": true, 00:23:02.630 "num_base_bdevs": 4, 00:23:02.630 "num_base_bdevs_discovered": 4, 00:23:02.630 "num_base_bdevs_operational": 4, 00:23:02.630 "process": { 00:23:02.630 "type": "rebuild", 00:23:02.630 "target": "spare", 00:23:02.630 "progress": { 00:23:02.630 "blocks": 16384, 00:23:02.630 "percent": 25 00:23:02.630 } 00:23:02.630 }, 00:23:02.630 "base_bdevs_list": [ 00:23:02.630 { 00:23:02.630 "name": "spare", 00:23:02.630 "uuid": "79476540-9108-5582-8309-3210d5e26b04", 00:23:02.630 "is_configured": true, 00:23:02.630 "data_offset": 2048, 00:23:02.630 "data_size": 63488 00:23:02.630 }, 00:23:02.630 { 00:23:02.630 "name": "BaseBdev2", 00:23:02.630 "uuid": "6af47b06-7057-5464-8d43-9cfac17ad701", 00:23:02.630 "is_configured": true, 00:23:02.630 "data_offset": 2048, 00:23:02.630 "data_size": 63488 00:23:02.630 }, 00:23:02.630 { 00:23:02.630 "name": "BaseBdev3", 00:23:02.630 "uuid": "51b08255-25f5-5a53-a4ca-050ceaeeb65b", 00:23:02.630 "is_configured": true, 00:23:02.630 "data_offset": 2048, 00:23:02.630 "data_size": 63488 00:23:02.630 }, 00:23:02.630 { 00:23:02.630 "name": "BaseBdev4", 00:23:02.630 "uuid": "a601c332-baa8-522d-8b2a-7a391a3d5aff", 00:23:02.630 "is_configured": true, 00:23:02.630 "data_offset": 2048, 00:23:02.630 "data_size": 63488 00:23:02.630 } 00:23:02.630 ] 00:23:02.630 }' 00:23:02.630 17:01:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:02.897 17:01:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:23:02.897 17:01:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:02.897 17:01:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:02.897 17:01:51 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:23:02.897 17:01:51 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:23:02.897 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:23:02.897 17:01:51 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:23:02.897 17:01:51 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:23:02.897 17:01:51 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:23:02.897 17:01:51 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:02.897 [2024-11-05 17:01:51.775070] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:03.155 [2024-11-05 17:01:51.811469] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:03.155 [2024-11-05 17:01:51.999196] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:03.155 [2024-11-05 17:01:51.999584] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:03.414 [2024-11-05 17:01:52.101922] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005d40 00:23:03.414 [2024-11-05 17:01:52.102078] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005fb0 00:23:03.414 17:01:52 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:23:03.414 17:01:52 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:23:03.414 17:01:52 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:03.414 17:01:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:03.414 17:01:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:03.414 17:01:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:03.414 17:01:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:03.414 17:01:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.414 17:01:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.672 [2024-11-05 17:01:52.474123] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:23:03.672 17:01:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:03.672 "name": "raid_bdev1", 00:23:03.672 "uuid": "78318fa5-79a8-4dd3-a2ea-1e80f3ba4b0e", 00:23:03.672 "strip_size_kb": 0, 00:23:03.672 "state": "online", 00:23:03.672 "raid_level": "raid1", 00:23:03.672 "superblock": true, 00:23:03.672 "num_base_bdevs": 4, 00:23:03.672 "num_base_bdevs_discovered": 3, 00:23:03.672 "num_base_bdevs_operational": 3, 00:23:03.672 "process": { 00:23:03.672 "type": "rebuild", 00:23:03.672 "target": "spare", 00:23:03.672 "progress": { 00:23:03.672 "blocks": 26624, 00:23:03.672 "percent": 41 00:23:03.672 } 00:23:03.672 }, 00:23:03.672 "base_bdevs_list": [ 00:23:03.672 { 00:23:03.672 "name": "spare", 00:23:03.672 "uuid": "79476540-9108-5582-8309-3210d5e26b04", 00:23:03.672 "is_configured": true, 00:23:03.672 "data_offset": 2048, 00:23:03.672 "data_size": 63488 00:23:03.672 }, 
00:23:03.672 { 00:23:03.672 "name": null, 00:23:03.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.672 "is_configured": false, 00:23:03.672 "data_offset": 2048, 00:23:03.672 "data_size": 63488 00:23:03.672 }, 00:23:03.672 { 00:23:03.672 "name": "BaseBdev3", 00:23:03.672 "uuid": "51b08255-25f5-5a53-a4ca-050ceaeeb65b", 00:23:03.672 "is_configured": true, 00:23:03.672 "data_offset": 2048, 00:23:03.672 "data_size": 63488 00:23:03.672 }, 00:23:03.672 { 00:23:03.672 "name": "BaseBdev4", 00:23:03.672 "uuid": "a601c332-baa8-522d-8b2a-7a391a3d5aff", 00:23:03.672 "is_configured": true, 00:23:03.672 "data_offset": 2048, 00:23:03.672 "data_size": 63488 00:23:03.672 } 00:23:03.672 ] 00:23:03.672 }' 00:23:03.672 17:01:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:03.672 17:01:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:03.672 17:01:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:03.930 17:01:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:03.930 17:01:52 -- bdev/bdev_raid.sh@657 -- # local timeout=552 00:23:03.930 17:01:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:03.930 17:01:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:03.930 17:01:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:03.930 17:01:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:03.930 17:01:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:03.930 17:01:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:03.930 17:01:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.930 17:01:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.930 [2024-11-05 17:01:52.697287] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:23:03.930 17:01:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:03.930 "name": "raid_bdev1", 00:23:03.930 "uuid": "78318fa5-79a8-4dd3-a2ea-1e80f3ba4b0e", 00:23:03.930 "strip_size_kb": 0, 00:23:03.930 "state": "online", 00:23:03.930 "raid_level": "raid1", 00:23:03.930 "superblock": true, 00:23:03.930 "num_base_bdevs": 4, 00:23:03.930 "num_base_bdevs_discovered": 3, 00:23:03.930 "num_base_bdevs_operational": 3, 00:23:03.930 "process": { 00:23:03.930 "type": "rebuild", 00:23:03.930 "target": "spare", 00:23:03.930 "progress": { 00:23:03.930 "blocks": 32768, 00:23:03.930 "percent": 51 00:23:03.930 } 00:23:03.930 }, 00:23:03.930 "base_bdevs_list": [ 00:23:03.930 { 00:23:03.930 "name": "spare", 00:23:03.930 "uuid": "79476540-9108-5582-8309-3210d5e26b04", 00:23:03.930 "is_configured": true, 00:23:03.930 "data_offset": 2048, 00:23:03.930 "data_size": 63488 00:23:03.930 }, 00:23:03.930 { 00:23:03.930 "name": null, 00:23:03.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.930 "is_configured": false, 00:23:03.930 "data_offset": 2048, 00:23:03.930 "data_size": 63488 00:23:03.930 }, 00:23:03.930 { 00:23:03.930 "name": "BaseBdev3", 00:23:03.930 "uuid": "51b08255-25f5-5a53-a4ca-050ceaeeb65b", 00:23:03.930 "is_configured": true, 00:23:03.931 "data_offset": 2048, 00:23:03.931 "data_size": 63488 00:23:03.931 }, 00:23:03.931 { 00:23:03.931 "name": "BaseBdev4", 00:23:03.931 "uuid": "a601c332-baa8-522d-8b2a-7a391a3d5aff", 00:23:03.931 "is_configured": true, 00:23:03.931 "data_offset": 2048, 00:23:03.931 "data_size": 63488 
00:23:03.931 } 00:23:03.931 ] 00:23:03.931 }' 00:23:03.931 17:01:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:04.188 17:01:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:04.188 17:01:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:04.188 17:01:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:04.188 17:01:52 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:04.188 [2024-11-05 17:01:52.921155] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:23:04.446 [2024-11-05 17:01:53.169888] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:23:04.446 [2024-11-05 17:01:53.285591] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:23:05.013 [2024-11-05 17:01:53.631034] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:23:05.013 [2024-11-05 17:01:53.855051] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:23:05.013 17:01:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:05.013 17:01:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:05.013 17:01:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:05.013 17:01:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:05.013 17:01:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:05.013 17:01:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:05.013 17:01:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.013 17:01:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.271 17:01:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:05.271 "name": "raid_bdev1", 00:23:05.271 "uuid": "78318fa5-79a8-4dd3-a2ea-1e80f3ba4b0e", 00:23:05.271 "strip_size_kb": 0, 00:23:05.271 "state": "online", 00:23:05.271 "raid_level": "raid1", 00:23:05.271 "superblock": true, 00:23:05.271 "num_base_bdevs": 4, 00:23:05.271 "num_base_bdevs_discovered": 3, 00:23:05.271 "num_base_bdevs_operational": 3, 00:23:05.271 "process": { 00:23:05.271 "type": "rebuild", 00:23:05.271 "target": "spare", 00:23:05.271 "progress": { 00:23:05.271 "blocks": 53248, 00:23:05.271 "percent": 83 00:23:05.271 } 00:23:05.271 }, 00:23:05.271 "base_bdevs_list": [ 00:23:05.271 { 00:23:05.271 "name": "spare", 00:23:05.271 "uuid": "79476540-9108-5582-8309-3210d5e26b04", 00:23:05.271 "is_configured": true, 00:23:05.271 "data_offset": 2048, 00:23:05.271 "data_size": 63488 00:23:05.271 }, 00:23:05.271 { 00:23:05.271 "name": null, 00:23:05.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.271 "is_configured": false, 00:23:05.271 "data_offset": 2048, 00:23:05.271 "data_size": 63488 00:23:05.271 }, 00:23:05.271 { 00:23:05.271 "name": "BaseBdev3", 00:23:05.271 "uuid": "51b08255-25f5-5a53-a4ca-050ceaeeb65b", 00:23:05.271 "is_configured": true, 00:23:05.271 "data_offset": 2048, 00:23:05.271 "data_size": 63488 00:23:05.271 }, 00:23:05.271 { 00:23:05.271 "name": "BaseBdev4", 00:23:05.271 "uuid": "a601c332-baa8-522d-8b2a-7a391a3d5aff", 00:23:05.271 "is_configured": true, 00:23:05.271 "data_offset": 2048, 00:23:05.271 "data_size": 63488 00:23:05.271 } 
00:23:05.271 ] 00:23:05.271 }' 00:23:05.271 17:01:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:05.271 17:01:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:05.271 17:01:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:05.529 17:01:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:05.529 17:01:54 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:05.529 [2024-11-05 17:01:54.292920] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:23:05.787 [2024-11-05 17:01:54.617643] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:06.045 [2024-11-05 17:01:54.723341] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:06.045 [2024-11-05 17:01:54.725927] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:06.304 17:01:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:06.304 17:01:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:06.304 17:01:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:06.304 17:01:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:06.304 17:01:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:06.304 17:01:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:06.304 17:01:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.304 17:01:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.562 17:01:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:06.562 "name": "raid_bdev1", 00:23:06.562 "uuid": "78318fa5-79a8-4dd3-a2ea-1e80f3ba4b0e", 00:23:06.562 "strip_size_kb": 0, 00:23:06.562 "state": "online", 00:23:06.562 "raid_level": "raid1", 00:23:06.562 "superblock": true, 00:23:06.562 "num_base_bdevs": 4, 00:23:06.562 "num_base_bdevs_discovered": 3, 00:23:06.562 "num_base_bdevs_operational": 3, 00:23:06.562 "base_bdevs_list": [ 00:23:06.562 { 00:23:06.562 "name": "spare", 00:23:06.562 "uuid": "79476540-9108-5582-8309-3210d5e26b04", 00:23:06.562 "is_configured": true, 00:23:06.562 "data_offset": 2048, 00:23:06.562 "data_size": 63488 00:23:06.562 }, 00:23:06.562 { 00:23:06.562 "name": null, 00:23:06.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.562 "is_configured": false, 00:23:06.562 "data_offset": 2048, 00:23:06.562 "data_size": 63488 00:23:06.562 }, 00:23:06.562 { 00:23:06.562 "name": "BaseBdev3", 00:23:06.562 "uuid": "51b08255-25f5-5a53-a4ca-050ceaeeb65b", 00:23:06.562 "is_configured": true, 00:23:06.562 "data_offset": 2048, 00:23:06.562 "data_size": 63488 00:23:06.562 }, 00:23:06.562 { 00:23:06.562 "name": "BaseBdev4", 00:23:06.562 "uuid": "a601c332-baa8-522d-8b2a-7a391a3d5aff", 00:23:06.562 "is_configured": true, 00:23:06.562 "data_offset": 2048, 00:23:06.562 "data_size": 63488 00:23:06.562 } 00:23:06.562 ] 00:23:06.562 }' 00:23:06.562 17:01:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:06.821 17:01:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:06.821 17:01:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:06.821 17:01:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:06.821 17:01:55 -- bdev/bdev_raid.sh@660 -- # break 00:23:06.821 17:01:55 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:23:06.821 17:01:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:06.821 17:01:55 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:06.821 17:01:55 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:06.821 17:01:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:06.821 17:01:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.821 17:01:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.078 17:01:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:07.078 "name": "raid_bdev1", 00:23:07.078 "uuid": "78318fa5-79a8-4dd3-a2ea-1e80f3ba4b0e", 00:23:07.078 "strip_size_kb": 0, 00:23:07.078 "state": "online", 00:23:07.078 "raid_level": "raid1", 00:23:07.078 "superblock": true, 00:23:07.078 "num_base_bdevs": 4, 00:23:07.078 "num_base_bdevs_discovered": 3, 00:23:07.078 "num_base_bdevs_operational": 3, 00:23:07.078 "base_bdevs_list": [ 00:23:07.078 { 00:23:07.078 "name": "spare", 00:23:07.078 "uuid": "79476540-9108-5582-8309-3210d5e26b04", 00:23:07.078 "is_configured": true, 00:23:07.078 "data_offset": 2048, 00:23:07.078 "data_size": 63488 00:23:07.078 }, 00:23:07.078 { 00:23:07.079 "name": null, 00:23:07.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.079 "is_configured": false, 00:23:07.079 "data_offset": 2048, 00:23:07.079 "data_size": 63488 00:23:07.079 }, 00:23:07.079 { 00:23:07.079 "name": "BaseBdev3", 00:23:07.079 "uuid": "51b08255-25f5-5a53-a4ca-050ceaeeb65b", 00:23:07.079 "is_configured": true, 00:23:07.079 "data_offset": 2048, 00:23:07.079 "data_size": 63488 00:23:07.079 }, 00:23:07.079 { 00:23:07.079 "name": "BaseBdev4", 00:23:07.079 "uuid": "a601c332-baa8-522d-8b2a-7a391a3d5aff", 00:23:07.079 "is_configured": true, 00:23:07.079 "data_offset": 2048, 00:23:07.079 "data_size": 63488 00:23:07.079 } 00:23:07.079 ] 00:23:07.079 }' 00:23:07.079 17:01:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:07.079 17:01:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:07.079 17:01:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:07.079 17:01:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:07.079 17:01:55 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:07.079 17:01:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:07.079 17:01:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:07.079 17:01:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:07.079 17:01:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:07.079 17:01:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:07.079 17:01:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:07.079 17:01:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:07.079 17:01:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:07.079 17:01:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:07.079 17:01:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.079 17:01:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.337 17:01:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:07.337 "name": "raid_bdev1", 00:23:07.337 "uuid": "78318fa5-79a8-4dd3-a2ea-1e80f3ba4b0e", 00:23:07.337 "strip_size_kb": 0, 00:23:07.337 "state": "online", 
00:23:07.337 "raid_level": "raid1", 00:23:07.337 "superblock": true, 00:23:07.337 "num_base_bdevs": 4, 00:23:07.337 "num_base_bdevs_discovered": 3, 00:23:07.337 "num_base_bdevs_operational": 3, 00:23:07.337 "base_bdevs_list": [ 00:23:07.337 { 00:23:07.337 "name": "spare", 00:23:07.337 "uuid": "79476540-9108-5582-8309-3210d5e26b04", 00:23:07.337 "is_configured": true, 00:23:07.337 "data_offset": 2048, 00:23:07.337 "data_size": 63488 00:23:07.337 }, 00:23:07.337 { 00:23:07.337 "name": null, 00:23:07.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.337 "is_configured": false, 00:23:07.337 "data_offset": 2048, 00:23:07.337 "data_size": 63488 00:23:07.337 }, 00:23:07.337 { 00:23:07.337 "name": "BaseBdev3", 00:23:07.337 "uuid": "51b08255-25f5-5a53-a4ca-050ceaeeb65b", 00:23:07.337 "is_configured": true, 00:23:07.337 "data_offset": 2048, 00:23:07.337 "data_size": 63488 00:23:07.337 }, 00:23:07.337 { 00:23:07.337 "name": "BaseBdev4", 00:23:07.337 "uuid": "a601c332-baa8-522d-8b2a-7a391a3d5aff", 00:23:07.337 "is_configured": true, 00:23:07.337 "data_offset": 2048, 00:23:07.337 "data_size": 63488 00:23:07.337 } 00:23:07.337 ] 00:23:07.337 }' 00:23:07.337 17:01:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:07.337 17:01:56 -- common/autotest_common.sh@10 -- # set +x 00:23:07.904 17:01:56 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:08.162 [2024-11-05 17:01:56.947514] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:08.162 [2024-11-05 17:01:56.947742] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:08.162 00:23:08.162 Latency(us) 00:23:08.162 [2024-11-05T17:01:57.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.162 [2024-11-05T17:01:57.039Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:08.162 raid_bdev1 : 11.28 108.91 326.74 0.00 0.00 12805.79 294.17 116773.24 00:23:08.162 [2024-11-05T17:01:57.039Z] =================================================================================================================== 00:23:08.162 [2024-11-05T17:01:57.039Z] Total : 108.91 326.74 0.00 0.00 12805.79 294.17 116773.24 00:23:08.162 [2024-11-05 17:01:57.006206] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:08.162 [2024-11-05 17:01:57.006371] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:08.162 [2024-11-05 17:01:57.006507] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:08.162 0 00:23:08.162 [2024-11-05 17:01:57.006761] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:23:08.162 17:01:57 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.162 17:01:57 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:08.729 17:01:57 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:08.729 17:01:57 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:23:08.729 17:01:57 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:23:08.729 17:01:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:08.729 17:01:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:23:08.729 17:01:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:23:08.729 17:01:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:08.729 17:01:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:08.729 17:01:57 -- bdev/nbd_common.sh@12 -- # local i 00:23:08.729 17:01:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:08.729 17:01:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:08.729 17:01:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:23:08.729 /dev/nbd0 00:23:08.729 17:01:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:08.729 17:01:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:08.729 17:01:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:08.729 17:01:57 -- common/autotest_common.sh@867 -- # local i 00:23:08.729 17:01:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:08.729 17:01:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:08.729 17:01:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:09.003 17:01:57 -- common/autotest_common.sh@871 -- # break 00:23:09.004 17:01:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:09.004 17:01:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:09.004 17:01:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:09.004 1+0 records in 00:23:09.004 1+0 records out 00:23:09.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000609511 s, 6.7 MB/s 00:23:09.004 17:01:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.004 17:01:57 -- common/autotest_common.sh@884 -- # size=4096 00:23:09.004 17:01:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.004 17:01:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:09.004 17:01:57 -- common/autotest_common.sh@887 -- # return 0 00:23:09.004 17:01:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:09.004 17:01:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:09.004 17:01:57 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:09.004 17:01:57 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:23:09.004 17:01:57 -- bdev/bdev_raid.sh@678 -- # continue 00:23:09.004 17:01:57 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:09.004 17:01:57 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:23:09.004 17:01:57 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:23:09.004 17:01:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:09.004 17:01:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:23:09.004 17:01:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:09.004 17:01:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:09.004 17:01:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:09.004 17:01:57 -- bdev/nbd_common.sh@12 -- # local i 00:23:09.004 17:01:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:09.004 17:01:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:09.004 17:01:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:23:09.263 /dev/nbd1 00:23:09.263 17:01:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:09.263 17:01:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:09.263 17:01:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 
00:23:09.263 17:01:57 -- common/autotest_common.sh@867 -- # local i 00:23:09.263 17:01:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:09.263 17:01:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:09.263 17:01:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:23:09.263 17:01:57 -- common/autotest_common.sh@871 -- # break 00:23:09.263 17:01:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:09.263 17:01:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:09.263 17:01:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:09.263 1+0 records in 00:23:09.263 1+0 records out 00:23:09.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445776 s, 9.2 MB/s 00:23:09.263 17:01:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.263 17:01:57 -- common/autotest_common.sh@884 -- # size=4096 00:23:09.263 17:01:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.263 17:01:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:09.263 17:01:57 -- common/autotest_common.sh@887 -- # return 0 00:23:09.263 17:01:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:09.263 17:01:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:09.263 17:01:57 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:09.263 17:01:58 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:09.263 17:01:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:09.263 17:01:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:09.263 17:01:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:09.263 17:01:58 -- bdev/nbd_common.sh@51 -- # local i 00:23:09.263 17:01:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:09.263 17:01:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:09.522 17:01:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:09.522 17:01:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:09.522 17:01:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:09.522 17:01:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:09.522 17:01:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:09.522 17:01:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:09.522 17:01:58 -- bdev/nbd_common.sh@41 -- # break 00:23:09.522 17:01:58 -- bdev/nbd_common.sh@45 -- # return 0 00:23:09.522 17:01:58 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:09.522 17:01:58 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:23:09.522 17:01:58 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:23:09.522 17:01:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:09.522 17:01:58 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:23:09.522 17:01:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:09.522 17:01:58 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:09.522 17:01:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:09.522 17:01:58 -- bdev/nbd_common.sh@12 -- # local i 00:23:09.522 17:01:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:09.522 17:01:58 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:09.522 17:01:58 -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:23:09.780 /dev/nbd1 00:23:09.780 17:01:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:09.780 17:01:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:09.780 17:01:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:23:09.780 17:01:58 -- common/autotest_common.sh@867 -- # local i 00:23:09.780 17:01:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:09.780 17:01:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:09.780 17:01:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:23:09.780 17:01:58 -- common/autotest_common.sh@871 -- # break 00:23:09.780 17:01:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:09.780 17:01:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:09.780 17:01:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:09.780 1+0 records in 00:23:09.780 1+0 records out 00:23:09.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591407 s, 6.9 MB/s 00:23:09.780 17:01:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.780 17:01:58 -- common/autotest_common.sh@884 -- # size=4096 00:23:09.780 17:01:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.780 17:01:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:09.780 17:01:58 -- common/autotest_common.sh@887 -- # return 0 00:23:09.780 17:01:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:09.780 17:01:58 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:09.780 17:01:58 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:10.038 17:01:58 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:10.038 17:01:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:10.038 17:01:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:10.038 17:01:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:10.038 17:01:58 -- bdev/nbd_common.sh@51 -- # local i 00:23:10.038 17:01:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:10.038 17:01:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:10.296 17:01:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:10.296 17:01:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:10.296 17:01:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:10.296 17:01:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:10.296 17:01:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:10.296 17:01:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:10.296 17:01:58 -- bdev/nbd_common.sh@41 -- # break 00:23:10.296 17:01:58 -- bdev/nbd_common.sh@45 -- # return 0 00:23:10.296 17:01:58 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:10.296 17:01:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:10.296 17:01:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:10.296 17:01:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:10.296 17:01:58 -- bdev/nbd_common.sh@51 -- # local i 00:23:10.296 17:01:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:10.296 17:01:58 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:10.555 17:01:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:10.555 17:01:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:10.555 17:01:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:10.555 17:01:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:10.555 17:01:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:10.555 17:01:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:10.555 17:01:59 -- bdev/nbd_common.sh@41 -- # break 00:23:10.555 17:01:59 -- bdev/nbd_common.sh@45 -- # return 0 00:23:10.555 17:01:59 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:23:10.555 17:01:59 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:10.555 17:01:59 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:23:10.555 17:01:59 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:10.813 17:01:59 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:10.813 [2024-11-05 17:01:59.668465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:10.813 [2024-11-05 17:01:59.668712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.813 [2024-11-05 17:01:59.668792] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:23:10.813 [2024-11-05 17:01:59.669040] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.813 [2024-11-05 17:01:59.671274] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.813 [2024-11-05 17:01:59.671468] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:10.813 [2024-11-05 17:01:59.671677] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:10.813 [2024-11-05 17:01:59.671839] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:10.813 BaseBdev1 00:23:10.813 17:01:59 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:10.813 17:01:59 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:23:10.813 17:01:59 -- bdev/bdev_raid.sh@696 -- # continue 00:23:10.813 17:01:59 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:10.813 17:01:59 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:23:10.813 17:01:59 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:23:11.071 17:01:59 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:11.329 [2024-11-05 17:02:00.053481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:11.329 [2024-11-05 17:02:00.053685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:11.329 [2024-11-05 17:02:00.053821] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:23:11.329 [2024-11-05 17:02:00.053950] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:11.329 [2024-11-05 17:02:00.054495] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:11.329 
[2024-11-05 17:02:00.054669] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:11.329 [2024-11-05 17:02:00.054913] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:23:11.329 [2024-11-05 17:02:00.055014] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:23:11.329 [2024-11-05 17:02:00.055098] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:11.329 [2024-11-05 17:02:00.055196] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:23:11.329 [2024-11-05 17:02:00.055355] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:11.329 BaseBdev3 00:23:11.329 17:02:00 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:11.329 17:02:00 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:23:11.329 17:02:00 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:23:11.587 17:02:00 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:11.845 [2024-11-05 17:02:00.521586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:11.845 [2024-11-05 17:02:00.521791] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:11.845 [2024-11-05 17:02:00.521864] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:23:11.845 [2024-11-05 17:02:00.522113] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:11.845 [2024-11-05 17:02:00.522539] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:11.845 [2024-11-05 17:02:00.522730] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:11.845 [2024-11-05 17:02:00.522950] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:23:11.845 [2024-11-05 17:02:00.523081] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:11.845 BaseBdev4 00:23:11.845 17:02:00 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:11.845 17:02:00 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:12.104 [2024-11-05 17:02:00.893700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:12.104 [2024-11-05 17:02:00.893895] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:12.104 [2024-11-05 17:02:00.893963] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:23:12.104 [2024-11-05 17:02:00.894237] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:12.104 [2024-11-05 17:02:00.894690] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:12.104 [2024-11-05 17:02:00.894884] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:12.104 [2024-11-05 17:02:00.895087] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 
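# Condensed sketch of the passthru rebuild traced in this block, assumptions
# flagged: each surviving base bdev is torn down and re-created from its backing
# bdev so that examine re-reads the raid superblock; the slot emptied when
# BaseBdev2 was removed is skipped, matching the '[' -z '' ']' / continue pair
# above. Array contents and the spare_delay backing bdev follow the trace;
# error handling is elided.
rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for bdev in "${base_bdevs[@]}"; do
        if [[ -z $bdev ]]; then
                continue                                 # removed slot (BaseBdev2)
        fi
        $rpc_py bdev_passthru_delete "$bdev"
        $rpc_py bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
done
$rpc_py bdev_passthru_delete spare                       # spare sits on a delay bdev,
$rpc_py bdev_passthru_create -b spare_delay -p spare     # not a *_malloc one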
00:23:12.104 [2024-11-05 17:02:00.895212] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:12.104 spare 00:23:12.104 17:02:00 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:12.104 17:02:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:12.104 17:02:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:12.104 17:02:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:12.104 17:02:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:12.104 17:02:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:12.104 17:02:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:12.104 17:02:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:12.104 17:02:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:12.104 17:02:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:12.104 17:02:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.104 17:02:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.104 [2024-11-05 17:02:00.995352] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:23:12.104 [2024-11-05 17:02:00.995485] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:12.104 [2024-11-05 17:02:00.995635] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:23:12.104 [2024-11-05 17:02:00.996145] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:23:12.104 [2024-11-05 17:02:00.996275] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:23:12.104 [2024-11-05 17:02:00.996525] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:12.362 17:02:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:12.362 "name": "raid_bdev1", 00:23:12.362 "uuid": "78318fa5-79a8-4dd3-a2ea-1e80f3ba4b0e", 00:23:12.362 "strip_size_kb": 0, 00:23:12.362 "state": "online", 00:23:12.362 "raid_level": "raid1", 00:23:12.362 "superblock": true, 00:23:12.362 "num_base_bdevs": 4, 00:23:12.362 "num_base_bdevs_discovered": 3, 00:23:12.362 "num_base_bdevs_operational": 3, 00:23:12.362 "base_bdevs_list": [ 00:23:12.362 { 00:23:12.362 "name": "spare", 00:23:12.362 "uuid": "79476540-9108-5582-8309-3210d5e26b04", 00:23:12.362 "is_configured": true, 00:23:12.362 "data_offset": 2048, 00:23:12.362 "data_size": 63488 00:23:12.362 }, 00:23:12.362 { 00:23:12.362 "name": null, 00:23:12.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.362 "is_configured": false, 00:23:12.362 "data_offset": 2048, 00:23:12.362 "data_size": 63488 00:23:12.362 }, 00:23:12.362 { 00:23:12.362 "name": "BaseBdev3", 00:23:12.362 "uuid": "51b08255-25f5-5a53-a4ca-050ceaeeb65b", 00:23:12.362 "is_configured": true, 00:23:12.362 "data_offset": 2048, 00:23:12.362 "data_size": 63488 00:23:12.362 }, 00:23:12.362 { 00:23:12.362 "name": "BaseBdev4", 00:23:12.362 "uuid": "a601c332-baa8-522d-8b2a-7a391a3d5aff", 00:23:12.362 "is_configured": true, 00:23:12.362 "data_offset": 2048, 00:23:12.362 "data_size": 63488 00:23:12.362 } 00:23:12.362 ] 00:23:12.362 }' 00:23:12.362 17:02:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:12.362 17:02:01 -- common/autotest_common.sh@10 -- # set +x 00:23:12.929 17:02:01 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:23:12.929 17:02:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:12.929 17:02:01 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:12.929 17:02:01 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:12.929 17:02:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:12.929 17:02:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.929 17:02:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.187 17:02:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:13.187 "name": "raid_bdev1", 00:23:13.187 "uuid": "78318fa5-79a8-4dd3-a2ea-1e80f3ba4b0e", 00:23:13.187 "strip_size_kb": 0, 00:23:13.187 "state": "online", 00:23:13.187 "raid_level": "raid1", 00:23:13.187 "superblock": true, 00:23:13.187 "num_base_bdevs": 4, 00:23:13.187 "num_base_bdevs_discovered": 3, 00:23:13.187 "num_base_bdevs_operational": 3, 00:23:13.187 "base_bdevs_list": [ 00:23:13.187 { 00:23:13.187 "name": "spare", 00:23:13.187 "uuid": "79476540-9108-5582-8309-3210d5e26b04", 00:23:13.187 "is_configured": true, 00:23:13.187 "data_offset": 2048, 00:23:13.187 "data_size": 63488 00:23:13.187 }, 00:23:13.187 { 00:23:13.187 "name": null, 00:23:13.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.187 "is_configured": false, 00:23:13.187 "data_offset": 2048, 00:23:13.187 "data_size": 63488 00:23:13.187 }, 00:23:13.187 { 00:23:13.187 "name": "BaseBdev3", 00:23:13.187 "uuid": "51b08255-25f5-5a53-a4ca-050ceaeeb65b", 00:23:13.187 "is_configured": true, 00:23:13.187 "data_offset": 2048, 00:23:13.187 "data_size": 63488 00:23:13.187 }, 00:23:13.187 { 00:23:13.187 "name": "BaseBdev4", 00:23:13.187 "uuid": "a601c332-baa8-522d-8b2a-7a391a3d5aff", 00:23:13.187 "is_configured": true, 00:23:13.187 "data_offset": 2048, 00:23:13.187 "data_size": 63488 00:23:13.187 } 00:23:13.187 ] 00:23:13.187 }' 00:23:13.187 17:02:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:13.187 17:02:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:13.187 17:02:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:13.187 17:02:02 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:13.187 17:02:02 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.187 17:02:02 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:13.445 17:02:02 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:23:13.445 17:02:02 -- bdev/bdev_raid.sh@709 -- # killprocess 126371 00:23:13.445 17:02:02 -- common/autotest_common.sh@936 -- # '[' -z 126371 ']' 00:23:13.445 17:02:02 -- common/autotest_common.sh@940 -- # kill -0 126371 00:23:13.445 17:02:02 -- common/autotest_common.sh@941 -- # uname 00:23:13.445 17:02:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:13.445 17:02:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126371 00:23:13.445 killing process with pid 126371 00:23:13.445 Received shutdown signal, test time was about 16.605360 seconds 00:23:13.445 00:23:13.445 Latency(us) 00:23:13.445 [2024-11-05T17:02:02.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.445 [2024-11-05T17:02:02.322Z] =================================================================================================================== 00:23:13.445 [2024-11-05T17:02:02.322Z] Total : 0.00 0.00 0.00 0.00 0.00 
0.00 0.00 00:23:13.445 17:02:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:13.445 17:02:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:13.445 17:02:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 126371' 00:23:13.445 17:02:02 -- common/autotest_common.sh@955 -- # kill 126371 00:23:13.445 [2024-11-05 17:02:02.312706] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:13.445 [2024-11-05 17:02:02.312769] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:13.445 17:02:02 -- common/autotest_common.sh@960 -- # wait 126371 00:23:13.445 [2024-11-05 17:02:02.312838] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:13.445 [2024-11-05 17:02:02.312850] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:23:13.703 [2024-11-05 17:02:02.588219] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:15.076 ************************************ 00:23:15.076 END TEST raid_rebuild_test_sb_io 00:23:15.076 ************************************ 00:23:15.076 17:02:03 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:15.076 00:23:15.076 real 0m22.793s 00:23:15.076 user 0m36.582s 00:23:15.076 sys 0m2.717s 00:23:15.077 17:02:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:15.077 17:02:03 -- common/autotest_common.sh@10 -- # set +x 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:23:15.077 17:02:03 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:23:15.077 17:02:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:15.077 17:02:03 -- common/autotest_common.sh@10 -- # set +x 00:23:15.077 ************************************ 00:23:15.077 START TEST raid5f_state_function_test 00:23:15.077 ************************************ 00:23:15.077 17:02:03 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 3 false 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:15.077 
17:02:03 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@226 -- # raid_pid=126977 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126977' 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:15.077 Process raid pid: 126977 00:23:15.077 17:02:03 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126977 /var/tmp/spdk-raid.sock 00:23:15.077 17:02:03 -- common/autotest_common.sh@829 -- # '[' -z 126977 ']' 00:23:15.077 17:02:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:15.077 17:02:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:15.077 17:02:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:15.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:15.077 17:02:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:15.077 17:02:03 -- common/autotest_common.sh@10 -- # set +x 00:23:15.077 [2024-11-05 17:02:03.694746] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
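# Condensed sketch of the app bring-up just traced, assumptions flagged: pid
# 126977, the socket path and the -L bdev_raid debug flag are verbatim from the
# trace; waitforlisten is the autotest_common.sh helper that blocks until the
# RPC socket accepts connections. Trap/cleanup wiring is elided here.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock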
00:23:15.077 [2024-11-05 17:02:03.695841] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.077 [2024-11-05 17:02:03.865084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.334 [2024-11-05 17:02:04.023643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.334 [2024-11-05 17:02:04.192832] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:15.901 17:02:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.901 17:02:04 -- common/autotest_common.sh@862 -- # return 0 00:23:15.901 17:02:04 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:15.901 [2024-11-05 17:02:04.756657] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:15.901 [2024-11-05 17:02:04.756860] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:15.901 [2024-11-05 17:02:04.756997] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:15.901 [2024-11-05 17:02:04.757058] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:15.901 [2024-11-05 17:02:04.757334] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:15.901 [2024-11-05 17:02:04.757419] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:15.901 17:02:04 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:15.901 17:02:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:15.901 17:02:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:15.901 17:02:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:15.901 17:02:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:15.901 17:02:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:15.901 17:02:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:15.901 17:02:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:15.901 17:02:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:15.901 17:02:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:15.901 17:02:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.901 17:02:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:16.159 17:02:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:16.159 "name": "Existed_Raid", 00:23:16.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.159 "strip_size_kb": 64, 00:23:16.159 "state": "configuring", 00:23:16.159 "raid_level": "raid5f", 00:23:16.159 "superblock": false, 00:23:16.159 "num_base_bdevs": 3, 00:23:16.159 "num_base_bdevs_discovered": 0, 00:23:16.159 "num_base_bdevs_operational": 3, 00:23:16.159 "base_bdevs_list": [ 00:23:16.159 { 00:23:16.159 "name": "BaseBdev1", 00:23:16.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.159 "is_configured": false, 00:23:16.159 "data_offset": 0, 00:23:16.159 "data_size": 0 00:23:16.159 }, 00:23:16.159 { 00:23:16.159 "name": "BaseBdev2", 00:23:16.159 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:16.159 "is_configured": false, 00:23:16.159 "data_offset": 0, 00:23:16.159 "data_size": 0 00:23:16.159 }, 00:23:16.159 { 00:23:16.159 "name": "BaseBdev3", 00:23:16.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.159 "is_configured": false, 00:23:16.159 "data_offset": 0, 00:23:16.159 "data_size": 0 00:23:16.159 } 00:23:16.159 ] 00:23:16.159 }' 00:23:16.159 17:02:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:16.159 17:02:04 -- common/autotest_common.sh@10 -- # set +x 00:23:17.094 17:02:05 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:17.094 [2024-11-05 17:02:05.812707] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:17.094 [2024-11-05 17:02:05.812858] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:23:17.094 17:02:05 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:17.352 [2024-11-05 17:02:06.036781] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:17.352 [2024-11-05 17:02:06.036956] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:17.352 [2024-11-05 17:02:06.037062] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:17.352 [2024-11-05 17:02:06.037131] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:17.352 [2024-11-05 17:02:06.037259] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:17.352 [2024-11-05 17:02:06.037323] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:17.352 17:02:06 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:17.610 [2024-11-05 17:02:06.255131] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:17.610 BaseBdev1 00:23:17.610 17:02:06 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:17.610 17:02:06 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:17.610 17:02:06 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:17.610 17:02:06 -- common/autotest_common.sh@899 -- # local i 00:23:17.610 17:02:06 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:17.611 17:02:06 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:17.611 17:02:06 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:17.611 17:02:06 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:17.869 [ 00:23:17.869 { 00:23:17.869 "name": "BaseBdev1", 00:23:17.869 "aliases": [ 00:23:17.869 "390b4bd0-16de-4db2-96d6-7f8988a7845e" 00:23:17.869 ], 00:23:17.869 "product_name": "Malloc disk", 00:23:17.869 "block_size": 512, 00:23:17.869 "num_blocks": 65536, 00:23:17.869 "uuid": "390b4bd0-16de-4db2-96d6-7f8988a7845e", 00:23:17.869 "assigned_rate_limits": { 00:23:17.869 "rw_ios_per_sec": 0, 00:23:17.869 "rw_mbytes_per_sec": 0, 00:23:17.869 "r_mbytes_per_sec": 0, 00:23:17.869 "w_mbytes_per_sec": 
0 00:23:17.869 }, 00:23:17.869 "claimed": true, 00:23:17.869 "claim_type": "exclusive_write", 00:23:17.869 "zoned": false, 00:23:17.870 "supported_io_types": { 00:23:17.870 "read": true, 00:23:17.870 "write": true, 00:23:17.870 "unmap": true, 00:23:17.870 "write_zeroes": true, 00:23:17.870 "flush": true, 00:23:17.870 "reset": true, 00:23:17.870 "compare": false, 00:23:17.870 "compare_and_write": false, 00:23:17.870 "abort": true, 00:23:17.870 "nvme_admin": false, 00:23:17.870 "nvme_io": false 00:23:17.870 }, 00:23:17.870 "memory_domains": [ 00:23:17.870 { 00:23:17.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.870 "dma_device_type": 2 00:23:17.870 } 00:23:17.870 ], 00:23:17.870 "driver_specific": {} 00:23:17.870 } 00:23:17.870 ] 00:23:17.870 17:02:06 -- common/autotest_common.sh@905 -- # return 0 00:23:17.870 17:02:06 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:17.870 17:02:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:17.870 17:02:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:17.870 17:02:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:17.870 17:02:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:17.870 17:02:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:17.870 17:02:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:17.870 17:02:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:17.870 17:02:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:17.870 17:02:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:17.870 17:02:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.870 17:02:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:18.128 17:02:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:18.128 "name": "Existed_Raid", 00:23:18.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.128 "strip_size_kb": 64, 00:23:18.128 "state": "configuring", 00:23:18.128 "raid_level": "raid5f", 00:23:18.128 "superblock": false, 00:23:18.128 "num_base_bdevs": 3, 00:23:18.128 "num_base_bdevs_discovered": 1, 00:23:18.128 "num_base_bdevs_operational": 3, 00:23:18.128 "base_bdevs_list": [ 00:23:18.128 { 00:23:18.128 "name": "BaseBdev1", 00:23:18.128 "uuid": "390b4bd0-16de-4db2-96d6-7f8988a7845e", 00:23:18.128 "is_configured": true, 00:23:18.128 "data_offset": 0, 00:23:18.128 "data_size": 65536 00:23:18.128 }, 00:23:18.128 { 00:23:18.128 "name": "BaseBdev2", 00:23:18.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.128 "is_configured": false, 00:23:18.128 "data_offset": 0, 00:23:18.128 "data_size": 0 00:23:18.128 }, 00:23:18.128 { 00:23:18.128 "name": "BaseBdev3", 00:23:18.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.128 "is_configured": false, 00:23:18.128 "data_offset": 0, 00:23:18.128 "data_size": 0 00:23:18.128 } 00:23:18.128 ] 00:23:18.128 }' 00:23:18.128 17:02:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:18.128 17:02:06 -- common/autotest_common.sh@10 -- # set +x 00:23:18.695 17:02:07 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:18.695 [2024-11-05 17:02:07.579387] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:18.695 [2024-11-05 17:02:07.579547] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006680 name Existed_Raid, state configuring 00:23:18.959 17:02:07 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:23:18.959 17:02:07 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:18.959 [2024-11-05 17:02:07.839485] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:18.959 [2024-11-05 17:02:07.841373] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:18.959 [2024-11-05 17:02:07.841541] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:18.959 [2024-11-05 17:02:07.841640] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:18.959 [2024-11-05 17:02:07.841798] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:19.237 17:02:07 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:19.237 17:02:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:19.237 17:02:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:19.237 17:02:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:19.237 17:02:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:19.237 17:02:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:19.237 17:02:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:19.237 17:02:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:19.237 17:02:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:19.237 17:02:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:19.237 17:02:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:19.237 17:02:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:19.237 17:02:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.237 17:02:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:19.237 17:02:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:19.237 "name": "Existed_Raid", 00:23:19.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.237 "strip_size_kb": 64, 00:23:19.237 "state": "configuring", 00:23:19.237 "raid_level": "raid5f", 00:23:19.237 "superblock": false, 00:23:19.237 "num_base_bdevs": 3, 00:23:19.237 "num_base_bdevs_discovered": 1, 00:23:19.237 "num_base_bdevs_operational": 3, 00:23:19.237 "base_bdevs_list": [ 00:23:19.237 { 00:23:19.237 "name": "BaseBdev1", 00:23:19.237 "uuid": "390b4bd0-16de-4db2-96d6-7f8988a7845e", 00:23:19.237 "is_configured": true, 00:23:19.237 "data_offset": 0, 00:23:19.237 "data_size": 65536 00:23:19.237 }, 00:23:19.237 { 00:23:19.237 "name": "BaseBdev2", 00:23:19.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.237 "is_configured": false, 00:23:19.237 "data_offset": 0, 00:23:19.237 "data_size": 0 00:23:19.237 }, 00:23:19.237 { 00:23:19.237 "name": "BaseBdev3", 00:23:19.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.237 "is_configured": false, 00:23:19.237 "data_offset": 0, 00:23:19.237 "data_size": 0 00:23:19.237 } 00:23:19.237 ] 00:23:19.237 }' 00:23:19.237 17:02:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:19.237 17:02:08 -- common/autotest_common.sh@10 -- # set +x 00:23:19.829 17:02:08 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:20.087 [2024-11-05 17:02:08.822885] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:20.087 BaseBdev2 00:23:20.087 17:02:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:20.087 17:02:08 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:20.087 17:02:08 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:20.087 17:02:08 -- common/autotest_common.sh@899 -- # local i 00:23:20.087 17:02:08 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:20.087 17:02:08 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:20.087 17:02:08 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:20.345 17:02:09 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:20.605 [ 00:23:20.605 { 00:23:20.605 "name": "BaseBdev2", 00:23:20.605 "aliases": [ 00:23:20.605 "81d7b02c-5e60-4bc3-90bd-4171f31964dd" 00:23:20.605 ], 00:23:20.605 "product_name": "Malloc disk", 00:23:20.605 "block_size": 512, 00:23:20.605 "num_blocks": 65536, 00:23:20.605 "uuid": "81d7b02c-5e60-4bc3-90bd-4171f31964dd", 00:23:20.605 "assigned_rate_limits": { 00:23:20.605 "rw_ios_per_sec": 0, 00:23:20.605 "rw_mbytes_per_sec": 0, 00:23:20.605 "r_mbytes_per_sec": 0, 00:23:20.605 "w_mbytes_per_sec": 0 00:23:20.605 }, 00:23:20.605 "claimed": true, 00:23:20.605 "claim_type": "exclusive_write", 00:23:20.605 "zoned": false, 00:23:20.605 "supported_io_types": { 00:23:20.605 "read": true, 00:23:20.605 "write": true, 00:23:20.605 "unmap": true, 00:23:20.605 "write_zeroes": true, 00:23:20.605 "flush": true, 00:23:20.605 "reset": true, 00:23:20.605 "compare": false, 00:23:20.605 "compare_and_write": false, 00:23:20.605 "abort": true, 00:23:20.605 "nvme_admin": false, 00:23:20.605 "nvme_io": false 00:23:20.605 }, 00:23:20.605 "memory_domains": [ 00:23:20.605 { 00:23:20.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.605 "dma_device_type": 2 00:23:20.605 } 00:23:20.605 ], 00:23:20.605 "driver_specific": {} 00:23:20.605 } 00:23:20.605 ] 00:23:20.605 17:02:09 -- common/autotest_common.sh@905 -- # return 0 00:23:20.605 17:02:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:20.605 17:02:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:20.605 17:02:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:20.605 17:02:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:20.605 17:02:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:20.605 17:02:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:20.605 17:02:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:20.605 17:02:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:20.605 17:02:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:20.605 17:02:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:20.605 17:02:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:20.605 17:02:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:20.605 17:02:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.605 17:02:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
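The create/wait sequence traced here is the suite's standard pattern for blocking until a freshly created malloc bdev is registered and claimed. A minimal sketch of the waitforbdev helper, reconstructed from the common/autotest_common.sh@897-905 xtrace lines above — the real helper may add a retry loop, and rpc_py below is shorthand (an assumption) for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock invocation:

    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=$2
        local i
        [[ -z $bdev_timeout ]] && bdev_timeout=2000   # default wait of 2000 ms
        # settle outstanding examine callbacks so claims are final
        $rpc_py bdev_wait_for_examine
        # bdev_get_bdevs -t polls until the named bdev appears or the timeout expires
        $rpc_py bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
        return 0
    }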
00:23:20.605 17:02:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:20.605 "name": "Existed_Raid", 00:23:20.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.605 "strip_size_kb": 64, 00:23:20.605 "state": "configuring", 00:23:20.605 "raid_level": "raid5f", 00:23:20.605 "superblock": false, 00:23:20.605 "num_base_bdevs": 3, 00:23:20.605 "num_base_bdevs_discovered": 2, 00:23:20.605 "num_base_bdevs_operational": 3, 00:23:20.605 "base_bdevs_list": [ 00:23:20.605 { 00:23:20.605 "name": "BaseBdev1", 00:23:20.605 "uuid": "390b4bd0-16de-4db2-96d6-7f8988a7845e", 00:23:20.605 "is_configured": true, 00:23:20.605 "data_offset": 0, 00:23:20.605 "data_size": 65536 00:23:20.605 }, 00:23:20.605 { 00:23:20.605 "name": "BaseBdev2", 00:23:20.605 "uuid": "81d7b02c-5e60-4bc3-90bd-4171f31964dd", 00:23:20.605 "is_configured": true, 00:23:20.605 "data_offset": 0, 00:23:20.605 "data_size": 65536 00:23:20.605 }, 00:23:20.605 { 00:23:20.605 "name": "BaseBdev3", 00:23:20.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.605 "is_configured": false, 00:23:20.605 "data_offset": 0, 00:23:20.605 "data_size": 0 00:23:20.605 } 00:23:20.605 ] 00:23:20.605 }' 00:23:20.605 17:02:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:20.605 17:02:09 -- common/autotest_common.sh@10 -- # set +x 00:23:21.173 17:02:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:21.439 [2024-11-05 17:02:10.211799] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:21.439 [2024-11-05 17:02:10.212024] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:23:21.439 [2024-11-05 17:02:10.212075] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:21.439 [2024-11-05 17:02:10.212293] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:23:21.439 [2024-11-05 17:02:10.216948] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:23:21.439 [2024-11-05 17:02:10.217088] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:23:21.439 [2024-11-05 17:02:10.217447] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:21.439 BaseBdev3 00:23:21.439 17:02:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:21.439 17:02:10 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:21.439 17:02:10 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:21.439 17:02:10 -- common/autotest_common.sh@899 -- # local i 00:23:21.439 17:02:10 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:21.439 17:02:10 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:21.439 17:02:10 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:21.703 17:02:10 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:21.962 [ 00:23:21.962 { 00:23:21.962 "name": "BaseBdev3", 00:23:21.962 "aliases": [ 00:23:21.962 "a55a6fe7-0262-4086-95fd-851f47318677" 00:23:21.962 ], 00:23:21.962 "product_name": "Malloc disk", 00:23:21.962 "block_size": 512, 00:23:21.962 "num_blocks": 65536, 00:23:21.962 "uuid": "a55a6fe7-0262-4086-95fd-851f47318677", 00:23:21.962 "assigned_rate_limits": { 00:23:21.962 
"rw_ios_per_sec": 0, 00:23:21.962 "rw_mbytes_per_sec": 0, 00:23:21.962 "r_mbytes_per_sec": 0, 00:23:21.962 "w_mbytes_per_sec": 0 00:23:21.962 }, 00:23:21.962 "claimed": true, 00:23:21.962 "claim_type": "exclusive_write", 00:23:21.962 "zoned": false, 00:23:21.962 "supported_io_types": { 00:23:21.962 "read": true, 00:23:21.962 "write": true, 00:23:21.962 "unmap": true, 00:23:21.962 "write_zeroes": true, 00:23:21.962 "flush": true, 00:23:21.962 "reset": true, 00:23:21.962 "compare": false, 00:23:21.962 "compare_and_write": false, 00:23:21.962 "abort": true, 00:23:21.962 "nvme_admin": false, 00:23:21.962 "nvme_io": false 00:23:21.962 }, 00:23:21.962 "memory_domains": [ 00:23:21.962 { 00:23:21.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:21.962 "dma_device_type": 2 00:23:21.962 } 00:23:21.962 ], 00:23:21.962 "driver_specific": {} 00:23:21.962 } 00:23:21.962 ] 00:23:21.962 17:02:10 -- common/autotest_common.sh@905 -- # return 0 00:23:21.962 17:02:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:21.962 17:02:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:21.962 17:02:10 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:21.962 17:02:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:21.962 17:02:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:21.962 17:02:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:21.962 17:02:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:21.962 17:02:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:21.962 17:02:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:21.962 17:02:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:21.962 17:02:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:21.962 17:02:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:21.962 17:02:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.962 17:02:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.962 17:02:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:21.962 "name": "Existed_Raid", 00:23:21.962 "uuid": "a8ce52c1-774f-4a5a-bd78-ac9c15670304", 00:23:21.962 "strip_size_kb": 64, 00:23:21.962 "state": "online", 00:23:21.962 "raid_level": "raid5f", 00:23:21.962 "superblock": false, 00:23:21.962 "num_base_bdevs": 3, 00:23:21.962 "num_base_bdevs_discovered": 3, 00:23:21.962 "num_base_bdevs_operational": 3, 00:23:21.962 "base_bdevs_list": [ 00:23:21.962 { 00:23:21.962 "name": "BaseBdev1", 00:23:21.962 "uuid": "390b4bd0-16de-4db2-96d6-7f8988a7845e", 00:23:21.962 "is_configured": true, 00:23:21.962 "data_offset": 0, 00:23:21.962 "data_size": 65536 00:23:21.962 }, 00:23:21.962 { 00:23:21.962 "name": "BaseBdev2", 00:23:21.962 "uuid": "81d7b02c-5e60-4bc3-90bd-4171f31964dd", 00:23:21.962 "is_configured": true, 00:23:21.962 "data_offset": 0, 00:23:21.962 "data_size": 65536 00:23:21.962 }, 00:23:21.962 { 00:23:21.962 "name": "BaseBdev3", 00:23:21.962 "uuid": "a55a6fe7-0262-4086-95fd-851f47318677", 00:23:21.962 "is_configured": true, 00:23:21.962 "data_offset": 0, 00:23:21.962 "data_size": 65536 00:23:21.962 } 00:23:21.962 ] 00:23:21.962 }' 00:23:22.221 17:02:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:22.221 17:02:10 -- common/autotest_common.sh@10 -- # set +x 00:23:22.788 17:02:11 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:23:23.046 [2024-11-05 17:02:11.726473] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:23.046 17:02:11 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:23.046 17:02:11 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:23.046 17:02:11 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:23.046 17:02:11 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:23.046 17:02:11 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:23.046 17:02:11 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:23:23.046 17:02:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:23.046 17:02:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:23.046 17:02:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:23.046 17:02:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:23.046 17:02:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:23.046 17:02:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:23.046 17:02:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:23.046 17:02:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:23.046 17:02:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:23.046 17:02:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.046 17:02:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:23.304 17:02:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:23.304 "name": "Existed_Raid", 00:23:23.304 "uuid": "a8ce52c1-774f-4a5a-bd78-ac9c15670304", 00:23:23.304 "strip_size_kb": 64, 00:23:23.304 "state": "online", 00:23:23.304 "raid_level": "raid5f", 00:23:23.304 "superblock": false, 00:23:23.304 "num_base_bdevs": 3, 00:23:23.304 "num_base_bdevs_discovered": 2, 00:23:23.304 "num_base_bdevs_operational": 2, 00:23:23.304 "base_bdevs_list": [ 00:23:23.304 { 00:23:23.304 "name": null, 00:23:23.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.304 "is_configured": false, 00:23:23.304 "data_offset": 0, 00:23:23.304 "data_size": 65536 00:23:23.304 }, 00:23:23.304 { 00:23:23.304 "name": "BaseBdev2", 00:23:23.304 "uuid": "81d7b02c-5e60-4bc3-90bd-4171f31964dd", 00:23:23.304 "is_configured": true, 00:23:23.304 "data_offset": 0, 00:23:23.304 "data_size": 65536 00:23:23.304 }, 00:23:23.304 { 00:23:23.304 "name": "BaseBdev3", 00:23:23.304 "uuid": "a55a6fe7-0262-4086-95fd-851f47318677", 00:23:23.304 "is_configured": true, 00:23:23.304 "data_offset": 0, 00:23:23.304 "data_size": 65536 00:23:23.304 } 00:23:23.304 ] 00:23:23.304 }' 00:23:23.304 17:02:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:23.304 17:02:12 -- common/autotest_common.sh@10 -- # set +x 00:23:23.871 17:02:12 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:23.872 17:02:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:23.872 17:02:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:23.872 17:02:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.130 17:02:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:24.130 17:02:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:24.130 17:02:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:24.388 [2024-11-05 17:02:13.174344] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:24.388 [2024-11-05 17:02:13.174497] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:24.388 [2024-11-05 17:02:13.174675] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:24.388 17:02:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:24.388 17:02:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:24.388 17:02:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.388 17:02:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:24.646 17:02:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:24.646 17:02:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:24.646 17:02:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:24.905 [2024-11-05 17:02:13.681876] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:24.905 [2024-11-05 17:02:13.682050] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:23:24.905 17:02:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:24.905 17:02:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:24.905 17:02:13 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.905 17:02:13 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:25.163 17:02:13 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:25.163 17:02:13 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:25.163 17:02:13 -- bdev/bdev_raid.sh@287 -- # killprocess 126977 00:23:25.163 17:02:13 -- common/autotest_common.sh@936 -- # '[' -z 126977 ']' 00:23:25.163 17:02:13 -- common/autotest_common.sh@940 -- # kill -0 126977 00:23:25.163 17:02:13 -- common/autotest_common.sh@941 -- # uname 00:23:25.163 17:02:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:25.163 17:02:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126977 00:23:25.163 17:02:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:25.163 17:02:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:25.163 17:02:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 126977' 00:23:25.163 killing process with pid 126977 00:23:25.163 17:02:14 -- common/autotest_common.sh@955 -- # kill 126977 00:23:25.163 [2024-11-05 17:02:14.012325] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:25.163 17:02:14 -- common/autotest_common.sh@960 -- # wait 126977 00:23:25.163 [2024-11-05 17:02:14.012605] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:26.098 ************************************ 00:23:26.098 END TEST raid5f_state_function_test 00:23:26.098 ************************************ 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:26.098 00:23:26.098 real 0m11.309s 00:23:26.098 user 0m19.955s 00:23:26.098 sys 0m1.339s 00:23:26.098 17:02:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:26.098 17:02:14 -- common/autotest_common.sh@10 -- # set +x 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:23:26.098 17:02:14 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:23:26.098 
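killprocess 126977 above is how the test reaps the bdev_svc app that hosted the raid. A sketch assembled from the traced steps; the sudo branch is not exercised in this run (the process name is reactor_0), so its body is a guess, as is anything else the trace does not show:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid"                                # fail fast if the pid is already gone
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [[ $process_name == sudo ]]; then
            :                                         # escalation path, not taken in this run
        else
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                                   # reap it and propagate the exit status
    }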
17:02:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:26.098 17:02:14 -- common/autotest_common.sh@10 -- # set +x 00:23:26.098 ************************************ 00:23:26.098 START TEST raid5f_state_function_test_sb 00:23:26.098 ************************************ 00:23:26.098 17:02:14 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 3 true 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@226 -- # raid_pid=127347 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127347' 00:23:26.098 Process raid pid: 127347 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127347 /var/tmp/spdk-raid.sock 00:23:26.098 17:02:14 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:26.098 17:02:14 -- common/autotest_common.sh@829 -- # '[' -z 127347 ']' 00:23:26.098 17:02:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:26.357 17:02:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.357 17:02:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:26.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:26.357 17:02:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.357 17:02:14 -- common/autotest_common.sh@10 -- # set +x 00:23:26.357 [2024-11-05 17:02:15.059868] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
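From here the same state-function test reruns with superblock=true, so bdev_raid_create gets -s and every base bdev carries an on-disk raid superblock. That explains the geometry in the dumps that follow (assuming the 512-byte blocklen the malloc bdevs are created with): each 65536-block base bdev reserves 2048 blocks (2048 * 512 B = 1 MiB) for the superblock, so data_offset becomes 2048 and data_size becomes 65536 - 2048 = 63488; the assembled 3-disk raid5f then exposes (3 - 1) * 63488 = 126976 blocks, versus (3 - 1) * 65536 = 131072 in the superblock-less run above.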
00:23:26.357 [2024-11-05 17:02:15.061103] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.357 [2024-11-05 17:02:15.232001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.615 [2024-11-05 17:02:15.392655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.873 [2024-11-05 17:02:15.560638] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:27.131 17:02:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:27.132 17:02:15 -- common/autotest_common.sh@862 -- # return 0 00:23:27.132 17:02:15 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:27.390 [2024-11-05 17:02:16.143939] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:27.390 [2024-11-05 17:02:16.144183] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:27.390 [2024-11-05 17:02:16.144294] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:27.390 [2024-11-05 17:02:16.144427] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:27.390 [2024-11-05 17:02:16.144537] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:27.390 [2024-11-05 17:02:16.144624] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:27.390 17:02:16 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:27.390 17:02:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:27.390 17:02:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:27.390 17:02:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:27.390 17:02:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:27.390 17:02:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:27.390 17:02:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:27.390 17:02:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:27.390 17:02:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:27.390 17:02:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:27.390 17:02:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.390 17:02:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:27.648 17:02:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:27.648 "name": "Existed_Raid", 00:23:27.648 "uuid": "f9664ab6-00b8-45e9-b0c5-52292bc4e806", 00:23:27.648 "strip_size_kb": 64, 00:23:27.648 "state": "configuring", 00:23:27.648 "raid_level": "raid5f", 00:23:27.648 "superblock": true, 00:23:27.648 "num_base_bdevs": 3, 00:23:27.648 "num_base_bdevs_discovered": 0, 00:23:27.648 "num_base_bdevs_operational": 3, 00:23:27.648 "base_bdevs_list": [ 00:23:27.648 { 00:23:27.648 "name": "BaseBdev1", 00:23:27.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.648 "is_configured": false, 00:23:27.648 "data_offset": 0, 00:23:27.648 "data_size": 0 00:23:27.648 }, 00:23:27.648 { 00:23:27.648 "name": "BaseBdev2", 00:23:27.648 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:27.648 "is_configured": false, 00:23:27.648 "data_offset": 0, 00:23:27.648 "data_size": 0 00:23:27.648 }, 00:23:27.648 { 00:23:27.648 "name": "BaseBdev3", 00:23:27.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.648 "is_configured": false, 00:23:27.648 "data_offset": 0, 00:23:27.648 "data_size": 0 00:23:27.648 } 00:23:27.648 ] 00:23:27.648 }' 00:23:27.648 17:02:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:27.648 17:02:16 -- common/autotest_common.sh@10 -- # set +x 00:23:28.213 17:02:17 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:28.471 [2024-11-05 17:02:17.223989] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:28.471 [2024-11-05 17:02:17.224157] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:23:28.471 17:02:17 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:28.727 [2024-11-05 17:02:17.404069] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:28.727 [2024-11-05 17:02:17.404244] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:28.727 [2024-11-05 17:02:17.404345] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:28.727 [2024-11-05 17:02:17.404502] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:28.727 [2024-11-05 17:02:17.404624] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:28.727 [2024-11-05 17:02:17.404689] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:28.727 17:02:17 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:28.727 [2024-11-05 17:02:17.610016] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:28.727 BaseBdev1 00:23:28.727 17:02:17 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:28.727 17:02:17 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:28.727 17:02:17 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:28.727 17:02:17 -- common/autotest_common.sh@899 -- # local i 00:23:28.727 17:02:17 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:28.727 17:02:17 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:28.727 17:02:17 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:28.984 17:02:17 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:29.242 [ 00:23:29.242 { 00:23:29.242 "name": "BaseBdev1", 00:23:29.242 "aliases": [ 00:23:29.242 "4a94562f-1ffb-4233-a4d5-865fea3ee4eb" 00:23:29.242 ], 00:23:29.242 "product_name": "Malloc disk", 00:23:29.242 "block_size": 512, 00:23:29.242 "num_blocks": 65536, 00:23:29.242 "uuid": "4a94562f-1ffb-4233-a4d5-865fea3ee4eb", 00:23:29.242 "assigned_rate_limits": { 00:23:29.242 "rw_ios_per_sec": 0, 00:23:29.242 "rw_mbytes_per_sec": 0, 00:23:29.242 "r_mbytes_per_sec": 0, 00:23:29.242 
"w_mbytes_per_sec": 0 00:23:29.242 }, 00:23:29.242 "claimed": true, 00:23:29.242 "claim_type": "exclusive_write", 00:23:29.242 "zoned": false, 00:23:29.242 "supported_io_types": { 00:23:29.242 "read": true, 00:23:29.242 "write": true, 00:23:29.242 "unmap": true, 00:23:29.242 "write_zeroes": true, 00:23:29.242 "flush": true, 00:23:29.242 "reset": true, 00:23:29.242 "compare": false, 00:23:29.242 "compare_and_write": false, 00:23:29.242 "abort": true, 00:23:29.242 "nvme_admin": false, 00:23:29.242 "nvme_io": false 00:23:29.242 }, 00:23:29.242 "memory_domains": [ 00:23:29.242 { 00:23:29.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.242 "dma_device_type": 2 00:23:29.242 } 00:23:29.242 ], 00:23:29.242 "driver_specific": {} 00:23:29.242 } 00:23:29.242 ] 00:23:29.242 17:02:18 -- common/autotest_common.sh@905 -- # return 0 00:23:29.242 17:02:18 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:29.242 17:02:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:29.242 17:02:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:29.242 17:02:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:29.242 17:02:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:29.242 17:02:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:29.242 17:02:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:29.242 17:02:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:29.242 17:02:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:29.242 17:02:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:29.242 17:02:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.242 17:02:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:29.500 17:02:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:29.500 "name": "Existed_Raid", 00:23:29.500 "uuid": "58a48589-d3d5-458f-bb6f-9a4aec7acdce", 00:23:29.500 "strip_size_kb": 64, 00:23:29.500 "state": "configuring", 00:23:29.500 "raid_level": "raid5f", 00:23:29.500 "superblock": true, 00:23:29.500 "num_base_bdevs": 3, 00:23:29.500 "num_base_bdevs_discovered": 1, 00:23:29.500 "num_base_bdevs_operational": 3, 00:23:29.500 "base_bdevs_list": [ 00:23:29.500 { 00:23:29.500 "name": "BaseBdev1", 00:23:29.500 "uuid": "4a94562f-1ffb-4233-a4d5-865fea3ee4eb", 00:23:29.500 "is_configured": true, 00:23:29.500 "data_offset": 2048, 00:23:29.500 "data_size": 63488 00:23:29.500 }, 00:23:29.500 { 00:23:29.500 "name": "BaseBdev2", 00:23:29.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.500 "is_configured": false, 00:23:29.500 "data_offset": 0, 00:23:29.500 "data_size": 0 00:23:29.500 }, 00:23:29.500 { 00:23:29.500 "name": "BaseBdev3", 00:23:29.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.500 "is_configured": false, 00:23:29.500 "data_offset": 0, 00:23:29.500 "data_size": 0 00:23:29.500 } 00:23:29.500 ] 00:23:29.500 }' 00:23:29.500 17:02:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:29.500 17:02:18 -- common/autotest_common.sh@10 -- # set +x 00:23:30.066 17:02:18 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:30.323 [2024-11-05 17:02:19.018279] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:30.323 [2024-11-05 17:02:19.018434] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:23:30.323 17:02:19 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:23:30.323 17:02:19 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:30.581 17:02:19 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:30.839 BaseBdev1 00:23:30.839 17:02:19 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:23:30.839 17:02:19 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:30.839 17:02:19 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:30.839 17:02:19 -- common/autotest_common.sh@899 -- # local i 00:23:30.839 17:02:19 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:30.839 17:02:19 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:30.839 17:02:19 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:30.839 17:02:19 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:31.096 [ 00:23:31.096 { 00:23:31.096 "name": "BaseBdev1", 00:23:31.096 "aliases": [ 00:23:31.096 "0afbeeaa-1ce8-44b7-a27d-315303e01778" 00:23:31.096 ], 00:23:31.096 "product_name": "Malloc disk", 00:23:31.096 "block_size": 512, 00:23:31.096 "num_blocks": 65536, 00:23:31.096 "uuid": "0afbeeaa-1ce8-44b7-a27d-315303e01778", 00:23:31.096 "assigned_rate_limits": { 00:23:31.096 "rw_ios_per_sec": 0, 00:23:31.096 "rw_mbytes_per_sec": 0, 00:23:31.096 "r_mbytes_per_sec": 0, 00:23:31.096 "w_mbytes_per_sec": 0 00:23:31.096 }, 00:23:31.096 "claimed": false, 00:23:31.096 "zoned": false, 00:23:31.096 "supported_io_types": { 00:23:31.096 "read": true, 00:23:31.096 "write": true, 00:23:31.096 "unmap": true, 00:23:31.096 "write_zeroes": true, 00:23:31.096 "flush": true, 00:23:31.096 "reset": true, 00:23:31.096 "compare": false, 00:23:31.096 "compare_and_write": false, 00:23:31.096 "abort": true, 00:23:31.096 "nvme_admin": false, 00:23:31.096 "nvme_io": false 00:23:31.096 }, 00:23:31.096 "memory_domains": [ 00:23:31.096 { 00:23:31.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:31.096 "dma_device_type": 2 00:23:31.096 } 00:23:31.096 ], 00:23:31.096 "driver_specific": {} 00:23:31.096 } 00:23:31.096 ] 00:23:31.096 17:02:19 -- common/autotest_common.sh@905 -- # return 0 00:23:31.096 17:02:19 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:31.354 [2024-11-05 17:02:20.034108] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:31.354 [2024-11-05 17:02:20.036235] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:31.354 [2024-11-05 17:02:20.036432] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:31.354 [2024-11-05 17:02:20.036566] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:31.354 [2024-11-05 17:02:20.036750] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:31.354 17:02:20 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:31.354 17:02:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:31.354 
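verify_raid_bdev_state carries the actual assertions in both tests: the trace shows it caching the expected name/state/level/strip size (@117-@125) and fetching the live JSON via bdev_raid_get_bdevs piped through jq (@127), but the comparisons themselves run after xtrace_disable (@129) and are invisible here. A plausible sketch under that caveat — the individual field checks are a guess, and rpc_py again abbreviates the full rpc.py -s /var/tmp/spdk-raid.sock call:

    verify_raid_bdev_state() {
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3 strip_size=$4
        local num_base_bdevs_operational=$5
        local raid_bdev_info
        raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        # the assertions below are hidden by xtrace_disable in the log; guessed here:
        [[ $(jq -r '.state' <<< "$raid_bdev_info") == "$expected_state" ]]
        [[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == "$raid_level" ]]
        [[ $(jq -r '.strip_size_kb' <<< "$raid_bdev_info") -eq $strip_size ]]
        [[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") -eq $num_base_bdevs_operational ]]
    }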
17:02:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:31.354 17:02:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:31.354 17:02:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:31.354 17:02:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:31.354 17:02:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:31.354 17:02:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:31.354 17:02:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:31.354 17:02:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:31.354 17:02:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:31.354 17:02:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:31.354 17:02:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.354 17:02:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:31.613 17:02:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:31.613 "name": "Existed_Raid", 00:23:31.613 "uuid": "f12815c2-3b1f-44b4-b0de-fadec8b9e34f", 00:23:31.613 "strip_size_kb": 64, 00:23:31.613 "state": "configuring", 00:23:31.613 "raid_level": "raid5f", 00:23:31.613 "superblock": true, 00:23:31.613 "num_base_bdevs": 3, 00:23:31.613 "num_base_bdevs_discovered": 1, 00:23:31.613 "num_base_bdevs_operational": 3, 00:23:31.613 "base_bdevs_list": [ 00:23:31.613 { 00:23:31.613 "name": "BaseBdev1", 00:23:31.613 "uuid": "0afbeeaa-1ce8-44b7-a27d-315303e01778", 00:23:31.613 "is_configured": true, 00:23:31.613 "data_offset": 2048, 00:23:31.613 "data_size": 63488 00:23:31.613 }, 00:23:31.613 { 00:23:31.613 "name": "BaseBdev2", 00:23:31.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.613 "is_configured": false, 00:23:31.613 "data_offset": 0, 00:23:31.613 "data_size": 0 00:23:31.613 }, 00:23:31.613 { 00:23:31.613 "name": "BaseBdev3", 00:23:31.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.613 "is_configured": false, 00:23:31.613 "data_offset": 0, 00:23:31.613 "data_size": 0 00:23:31.613 } 00:23:31.613 ] 00:23:31.613 }' 00:23:31.613 17:02:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:31.613 17:02:20 -- common/autotest_common.sh@10 -- # set +x 00:23:32.179 17:02:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:32.437 [2024-11-05 17:02:21.239594] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:32.437 BaseBdev2 00:23:32.437 17:02:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:32.437 17:02:21 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:32.437 17:02:21 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:32.437 17:02:21 -- common/autotest_common.sh@899 -- # local i 00:23:32.437 17:02:21 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:32.437 17:02:21 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:32.437 17:02:21 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:32.695 17:02:21 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:32.953 [ 00:23:32.953 { 00:23:32.953 "name": "BaseBdev2", 00:23:32.953 "aliases": [ 00:23:32.953 
"d20518b9-a561-47af-aceb-b92bf0a85d4c" 00:23:32.953 ], 00:23:32.953 "product_name": "Malloc disk", 00:23:32.953 "block_size": 512, 00:23:32.953 "num_blocks": 65536, 00:23:32.953 "uuid": "d20518b9-a561-47af-aceb-b92bf0a85d4c", 00:23:32.953 "assigned_rate_limits": { 00:23:32.953 "rw_ios_per_sec": 0, 00:23:32.953 "rw_mbytes_per_sec": 0, 00:23:32.953 "r_mbytes_per_sec": 0, 00:23:32.953 "w_mbytes_per_sec": 0 00:23:32.953 }, 00:23:32.953 "claimed": true, 00:23:32.953 "claim_type": "exclusive_write", 00:23:32.953 "zoned": false, 00:23:32.953 "supported_io_types": { 00:23:32.953 "read": true, 00:23:32.953 "write": true, 00:23:32.953 "unmap": true, 00:23:32.953 "write_zeroes": true, 00:23:32.953 "flush": true, 00:23:32.953 "reset": true, 00:23:32.953 "compare": false, 00:23:32.953 "compare_and_write": false, 00:23:32.953 "abort": true, 00:23:32.953 "nvme_admin": false, 00:23:32.953 "nvme_io": false 00:23:32.953 }, 00:23:32.953 "memory_domains": [ 00:23:32.953 { 00:23:32.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:32.953 "dma_device_type": 2 00:23:32.953 } 00:23:32.953 ], 00:23:32.953 "driver_specific": {} 00:23:32.953 } 00:23:32.953 ] 00:23:32.953 17:02:21 -- common/autotest_common.sh@905 -- # return 0 00:23:32.953 17:02:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:32.953 17:02:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:32.953 17:02:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:32.953 17:02:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:32.953 17:02:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:32.953 17:02:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:32.953 17:02:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:32.953 17:02:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:32.953 17:02:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:32.953 17:02:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:32.953 17:02:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:32.953 17:02:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:32.953 17:02:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.953 17:02:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:33.212 17:02:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:33.212 "name": "Existed_Raid", 00:23:33.212 "uuid": "f12815c2-3b1f-44b4-b0de-fadec8b9e34f", 00:23:33.212 "strip_size_kb": 64, 00:23:33.212 "state": "configuring", 00:23:33.212 "raid_level": "raid5f", 00:23:33.212 "superblock": true, 00:23:33.212 "num_base_bdevs": 3, 00:23:33.212 "num_base_bdevs_discovered": 2, 00:23:33.212 "num_base_bdevs_operational": 3, 00:23:33.212 "base_bdevs_list": [ 00:23:33.212 { 00:23:33.212 "name": "BaseBdev1", 00:23:33.212 "uuid": "0afbeeaa-1ce8-44b7-a27d-315303e01778", 00:23:33.212 "is_configured": true, 00:23:33.212 "data_offset": 2048, 00:23:33.212 "data_size": 63488 00:23:33.212 }, 00:23:33.212 { 00:23:33.212 "name": "BaseBdev2", 00:23:33.212 "uuid": "d20518b9-a561-47af-aceb-b92bf0a85d4c", 00:23:33.212 "is_configured": true, 00:23:33.212 "data_offset": 2048, 00:23:33.212 "data_size": 63488 00:23:33.212 }, 00:23:33.212 { 00:23:33.212 "name": "BaseBdev3", 00:23:33.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.212 "is_configured": false, 00:23:33.212 "data_offset": 0, 00:23:33.212 "data_size": 0 
00:23:33.212 } 00:23:33.212 ] 00:23:33.212 }' 00:23:33.212 17:02:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:33.212 17:02:21 -- common/autotest_common.sh@10 -- # set +x 00:23:33.778 17:02:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:34.036 [2024-11-05 17:02:22.759979] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:34.036 [2024-11-05 17:02:22.760400] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:23:34.036 [2024-11-05 17:02:22.760543] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:34.036 [2024-11-05 17:02:22.760724] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:23:34.036 BaseBdev3 00:23:34.036 [2024-11-05 17:02:22.765153] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:23:34.036 [2024-11-05 17:02:22.765300] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:23:34.036 [2024-11-05 17:02:22.765579] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:34.036 17:02:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:34.036 17:02:22 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:34.036 17:02:22 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:34.036 17:02:22 -- common/autotest_common.sh@899 -- # local i 00:23:34.036 17:02:22 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:34.036 17:02:22 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:34.036 17:02:22 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:34.293 17:02:22 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:34.293 [ 00:23:34.293 { 00:23:34.293 "name": "BaseBdev3", 00:23:34.293 "aliases": [ 00:23:34.293 "5e4780b4-3d83-4de0-9c07-e9050ce10402" 00:23:34.293 ], 00:23:34.293 "product_name": "Malloc disk", 00:23:34.293 "block_size": 512, 00:23:34.293 "num_blocks": 65536, 00:23:34.293 "uuid": "5e4780b4-3d83-4de0-9c07-e9050ce10402", 00:23:34.293 "assigned_rate_limits": { 00:23:34.293 "rw_ios_per_sec": 0, 00:23:34.293 "rw_mbytes_per_sec": 0, 00:23:34.293 "r_mbytes_per_sec": 0, 00:23:34.293 "w_mbytes_per_sec": 0 00:23:34.293 }, 00:23:34.294 "claimed": true, 00:23:34.294 "claim_type": "exclusive_write", 00:23:34.294 "zoned": false, 00:23:34.294 "supported_io_types": { 00:23:34.294 "read": true, 00:23:34.294 "write": true, 00:23:34.294 "unmap": true, 00:23:34.294 "write_zeroes": true, 00:23:34.294 "flush": true, 00:23:34.294 "reset": true, 00:23:34.294 "compare": false, 00:23:34.294 "compare_and_write": false, 00:23:34.294 "abort": true, 00:23:34.294 "nvme_admin": false, 00:23:34.294 "nvme_io": false 00:23:34.294 }, 00:23:34.294 "memory_domains": [ 00:23:34.294 { 00:23:34.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.294 "dma_device_type": 2 00:23:34.294 } 00:23:34.294 ], 00:23:34.294 "driver_specific": {} 00:23:34.294 } 00:23:34.294 ] 00:23:34.294 17:02:23 -- common/autotest_common.sh@905 -- # return 0 00:23:34.294 17:02:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:34.294 17:02:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:34.294 17:02:23 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:34.294 17:02:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:34.294 17:02:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:34.294 17:02:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:34.294 17:02:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:34.294 17:02:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:34.294 17:02:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:34.294 17:02:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:34.294 17:02:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:34.294 17:02:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:34.294 17:02:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.294 17:02:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:34.551 17:02:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:34.551 "name": "Existed_Raid", 00:23:34.551 "uuid": "f12815c2-3b1f-44b4-b0de-fadec8b9e34f", 00:23:34.551 "strip_size_kb": 64, 00:23:34.551 "state": "online", 00:23:34.551 "raid_level": "raid5f", 00:23:34.551 "superblock": true, 00:23:34.551 "num_base_bdevs": 3, 00:23:34.551 "num_base_bdevs_discovered": 3, 00:23:34.551 "num_base_bdevs_operational": 3, 00:23:34.551 "base_bdevs_list": [ 00:23:34.551 { 00:23:34.551 "name": "BaseBdev1", 00:23:34.551 "uuid": "0afbeeaa-1ce8-44b7-a27d-315303e01778", 00:23:34.551 "is_configured": true, 00:23:34.552 "data_offset": 2048, 00:23:34.552 "data_size": 63488 00:23:34.552 }, 00:23:34.552 { 00:23:34.552 "name": "BaseBdev2", 00:23:34.552 "uuid": "d20518b9-a561-47af-aceb-b92bf0a85d4c", 00:23:34.552 "is_configured": true, 00:23:34.552 "data_offset": 2048, 00:23:34.552 "data_size": 63488 00:23:34.552 }, 00:23:34.552 { 00:23:34.552 "name": "BaseBdev3", 00:23:34.552 "uuid": "5e4780b4-3d83-4de0-9c07-e9050ce10402", 00:23:34.552 "is_configured": true, 00:23:34.552 "data_offset": 2048, 00:23:34.552 "data_size": 63488 00:23:34.552 } 00:23:34.552 ] 00:23:34.552 }' 00:23:34.552 17:02:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:34.552 17:02:23 -- common/autotest_common.sh@10 -- # set +x 00:23:35.126 17:02:23 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:35.412 [2024-11-05 17:02:24.182379] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:35.412 17:02:24 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:35.412 17:02:24 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:35.412 17:02:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:35.412 17:02:24 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:35.412 17:02:24 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:35.412 17:02:24 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:23:35.412 17:02:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:35.412 17:02:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:35.412 17:02:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:35.412 17:02:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:35.412 17:02:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:35.412 17:02:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:35.412 17:02:24 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:35.412 17:02:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:35.412 17:02:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:35.412 17:02:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.412 17:02:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:35.670 17:02:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:35.670 "name": "Existed_Raid", 00:23:35.670 "uuid": "f12815c2-3b1f-44b4-b0de-fadec8b9e34f", 00:23:35.670 "strip_size_kb": 64, 00:23:35.670 "state": "online", 00:23:35.670 "raid_level": "raid5f", 00:23:35.670 "superblock": true, 00:23:35.670 "num_base_bdevs": 3, 00:23:35.670 "num_base_bdevs_discovered": 2, 00:23:35.670 "num_base_bdevs_operational": 2, 00:23:35.670 "base_bdevs_list": [ 00:23:35.670 { 00:23:35.670 "name": null, 00:23:35.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.670 "is_configured": false, 00:23:35.670 "data_offset": 2048, 00:23:35.670 "data_size": 63488 00:23:35.670 }, 00:23:35.670 { 00:23:35.670 "name": "BaseBdev2", 00:23:35.670 "uuid": "d20518b9-a561-47af-aceb-b92bf0a85d4c", 00:23:35.670 "is_configured": true, 00:23:35.670 "data_offset": 2048, 00:23:35.670 "data_size": 63488 00:23:35.670 }, 00:23:35.670 { 00:23:35.670 "name": "BaseBdev3", 00:23:35.670 "uuid": "5e4780b4-3d83-4de0-9c07-e9050ce10402", 00:23:35.670 "is_configured": true, 00:23:35.670 "data_offset": 2048, 00:23:35.670 "data_size": 63488 00:23:35.670 } 00:23:35.670 ] 00:23:35.670 }' 00:23:35.670 17:02:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:35.670 17:02:24 -- common/autotest_common.sh@10 -- # set +x 00:23:36.238 17:02:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:36.238 17:02:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:36.238 17:02:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.238 17:02:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:36.496 17:02:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:36.496 17:02:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:36.496 17:02:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:36.754 [2024-11-05 17:02:25.525839] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:36.754 [2024-11-05 17:02:25.525999] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:36.754 [2024-11-05 17:02:25.526180] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:36.754 17:02:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:36.754 17:02:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:36.754 17:02:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:36.754 17:02:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.012 17:02:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:37.012 17:02:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:37.012 17:02:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:37.270 [2024-11-05 17:02:26.042297] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev3 00:23:37.270 [2024-11-05 17:02:26.042500] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:23:37.270 17:02:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:37.270 17:02:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:37.270 17:02:26 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.270 17:02:26 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:37.528 17:02:26 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:37.528 17:02:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:37.528 17:02:26 -- bdev/bdev_raid.sh@287 -- # killprocess 127347 00:23:37.528 17:02:26 -- common/autotest_common.sh@936 -- # '[' -z 127347 ']' 00:23:37.528 17:02:26 -- common/autotest_common.sh@940 -- # kill -0 127347 00:23:37.528 17:02:26 -- common/autotest_common.sh@941 -- # uname 00:23:37.528 17:02:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:37.528 17:02:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127347 00:23:37.528 17:02:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:37.528 17:02:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:37.528 17:02:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 127347' 00:23:37.528 killing process with pid 127347 00:23:37.528 17:02:26 -- common/autotest_common.sh@955 -- # kill 127347 00:23:37.528 [2024-11-05 17:02:26.372998] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:37.528 17:02:26 -- common/autotest_common.sh@960 -- # wait 127347 00:23:37.528 [2024-11-05 17:02:26.373222] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:38.463 ************************************ 00:23:38.463 END TEST raid5f_state_function_test_sb 00:23:38.463 ************************************ 00:23:38.463 17:02:27 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:38.463 00:23:38.463 real 0m12.307s 00:23:38.463 user 0m21.710s 00:23:38.463 sys 0m1.421s 00:23:38.463 17:02:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:38.463 17:02:27 -- common/autotest_common.sh@10 -- # set +x 00:23:38.463 17:02:27 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:23:38.463 17:02:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:23:38.463 17:02:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:38.463 17:02:27 -- common/autotest_common.sh@10 -- # set +x 00:23:38.463 ************************************ 00:23:38.463 START TEST raid5f_superblock_test 00:23:38.463 ************************************ 00:23:38.463 17:02:27 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid5f 3 00:23:38.463 17:02:27 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:23:38.463 17:02:27 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:23:38.463 17:02:27 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:23:38.463 17:02:27 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:23:38.463 17:02:27 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:23:38.463 17:02:27 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:23:38.463 17:02:27 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:23:38.463 17:02:27 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:23:38.463 17:02:27 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:23:38.463 17:02:27 -- bdev/bdev_raid.sh@344 -- # local 
strip_size 00:23:38.463 17:02:27 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:23:38.463 17:02:27 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:23:38.463 17:02:27 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:23:38.463 17:02:27 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:23:38.463 17:02:27 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:23:38.463 17:02:27 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:23:38.721 17:02:27 -- bdev/bdev_raid.sh@357 -- # raid_pid=127730 00:23:38.721 17:02:27 -- bdev/bdev_raid.sh@358 -- # waitforlisten 127730 /var/tmp/spdk-raid.sock 00:23:38.721 17:02:27 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:38.721 17:02:27 -- common/autotest_common.sh@829 -- # '[' -z 127730 ']' 00:23:38.721 17:02:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:38.721 17:02:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:38.721 17:02:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:38.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:38.721 17:02:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:38.721 17:02:27 -- common/autotest_common.sh@10 -- # set +x 00:23:38.721 [2024-11-05 17:02:27.428159] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:38.721 [2024-11-05 17:02:27.428691] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127730 ] 00:23:38.721 [2024-11-05 17:02:27.599536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.979 [2024-11-05 17:02:27.805875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.238 [2024-11-05 17:02:27.970410] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:39.804 17:02:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:39.804 17:02:28 -- common/autotest_common.sh@862 -- # return 0 00:23:39.804 17:02:28 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:23:39.804 17:02:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:39.804 17:02:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:23:39.804 17:02:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:23:39.804 17:02:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:39.804 17:02:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:39.804 17:02:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:39.804 17:02:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:39.804 17:02:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:39.804 malloc1 00:23:39.804 17:02:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:40.062 [2024-11-05 17:02:28.855865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:40.062 [2024-11-05 17:02:28.856135] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:23:40.062 [2024-11-05 17:02:28.856209] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:23:40.062 [2024-11-05 17:02:28.856479] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:40.062 [2024-11-05 17:02:28.858886] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:40.062 [2024-11-05 17:02:28.859077] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:40.062 pt1 00:23:40.062 17:02:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:40.062 17:02:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:40.062 17:02:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:23:40.062 17:02:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:23:40.062 17:02:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:40.062 17:02:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:40.062 17:02:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:40.062 17:02:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:40.062 17:02:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:40.320 malloc2 00:23:40.320 17:02:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:40.578 [2024-11-05 17:02:29.300506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:40.578 [2024-11-05 17:02:29.301041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:40.578 [2024-11-05 17:02:29.301121] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:40.578 [2024-11-05 17:02:29.301375] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:40.578 [2024-11-05 17:02:29.303509] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:40.578 [2024-11-05 17:02:29.303693] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:40.578 pt2 00:23:40.578 17:02:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:40.578 17:02:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:40.578 17:02:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:23:40.578 17:02:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:23:40.578 17:02:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:40.578 17:02:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:40.578 17:02:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:40.578 17:02:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:40.578 17:02:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:40.855 malloc3 00:23:40.855 17:02:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:41.115 [2024-11-05 17:02:29.846978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:41.115 [2024-11-05 17:02:29.847191] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:23:41.115 [2024-11-05 17:02:29.847273] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:23:41.115 [2024-11-05 17:02:29.847529] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:41.115 [2024-11-05 17:02:29.849696] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:41.115 [2024-11-05 17:02:29.849885] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:41.115 pt3 00:23:41.115 17:02:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:41.115 17:02:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:41.115 17:02:29 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:23:41.373 [2024-11-05 17:02:30.091113] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:41.373 [2024-11-05 17:02:30.093252] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:41.373 [2024-11-05 17:02:30.093450] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:41.373 [2024-11-05 17:02:30.093772] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:23:41.373 [2024-11-05 17:02:30.093923] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:41.373 [2024-11-05 17:02:30.094086] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:23:41.373 [2024-11-05 17:02:30.098391] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:23:41.373 [2024-11-05 17:02:30.098524] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:23:41.373 [2024-11-05 17:02:30.098838] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:41.373 17:02:30 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:41.373 17:02:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:41.373 17:02:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:41.373 17:02:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:41.373 17:02:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:41.373 17:02:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:41.373 17:02:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:41.373 17:02:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:41.373 17:02:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:41.373 17:02:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:41.373 17:02:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.373 17:02:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.632 17:02:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:41.632 "name": "raid_bdev1", 00:23:41.632 "uuid": "714bf854-67a7-4e59-a884-602a78fab7fa", 00:23:41.632 "strip_size_kb": 64, 00:23:41.632 "state": "online", 00:23:41.632 "raid_level": "raid5f", 00:23:41.632 "superblock": true, 00:23:41.632 "num_base_bdevs": 3, 00:23:41.632 "num_base_bdevs_discovered": 3, 00:23:41.632 "num_base_bdevs_operational": 3, 00:23:41.632 "base_bdevs_list": [ 00:23:41.632 { 00:23:41.632 "name": "pt1", 00:23:41.632 "uuid": 
"a48adccd-76fa-5421-9963-e9cad1367fd2", 00:23:41.632 "is_configured": true, 00:23:41.632 "data_offset": 2048, 00:23:41.632 "data_size": 63488 00:23:41.632 }, 00:23:41.632 { 00:23:41.632 "name": "pt2", 00:23:41.632 "uuid": "09396314-fb62-5b23-a627-a716aeffe387", 00:23:41.632 "is_configured": true, 00:23:41.632 "data_offset": 2048, 00:23:41.632 "data_size": 63488 00:23:41.632 }, 00:23:41.632 { 00:23:41.632 "name": "pt3", 00:23:41.632 "uuid": "d21785f4-f6ae-5048-86d4-56e1c8050826", 00:23:41.632 "is_configured": true, 00:23:41.632 "data_offset": 2048, 00:23:41.632 "data_size": 63488 00:23:41.632 } 00:23:41.632 ] 00:23:41.632 }' 00:23:41.632 17:02:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:41.632 17:02:30 -- common/autotest_common.sh@10 -- # set +x 00:23:42.198 17:02:30 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:42.198 17:02:30 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:23:42.456 [2024-11-05 17:02:31.144018] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:42.456 17:02:31 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=714bf854-67a7-4e59-a884-602a78fab7fa 00:23:42.456 17:02:31 -- bdev/bdev_raid.sh@380 -- # '[' -z 714bf854-67a7-4e59-a884-602a78fab7fa ']' 00:23:42.456 17:02:31 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:42.714 [2024-11-05 17:02:31.387925] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:42.714 [2024-11-05 17:02:31.388072] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:42.714 [2024-11-05 17:02:31.388220] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:42.714 [2024-11-05 17:02:31.388408] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:42.714 [2024-11-05 17:02:31.388512] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:23:42.714 17:02:31 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.714 17:02:31 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:23:42.971 17:02:31 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:23:42.971 17:02:31 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:23:42.971 17:02:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:42.971 17:02:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:42.971 17:02:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:42.971 17:02:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:43.229 17:02:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:43.229 17:02:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:43.487 17:02:32 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:43.487 17:02:32 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:43.745 17:02:32 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:23:43.745 17:02:32 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:43.745 17:02:32 -- common/autotest_common.sh@650 -- # local es=0 00:23:43.745 17:02:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:43.745 17:02:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:43.745 17:02:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:43.745 17:02:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:43.745 17:02:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:43.745 17:02:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:43.745 17:02:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:43.745 17:02:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:43.745 17:02:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:43.745 17:02:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:44.003 [2024-11-05 17:02:32.728137] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:44.003 [2024-11-05 17:02:32.729983] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:44.003 [2024-11-05 17:02:32.730155] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:44.003 [2024-11-05 17:02:32.730244] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:23:44.003 [2024-11-05 17:02:32.730513] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:23:44.003 [2024-11-05 17:02:32.730660] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:23:44.003 [2024-11-05 17:02:32.730743] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:44.003 [2024-11-05 17:02:32.730862] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:23:44.003 request: 00:23:44.003 { 00:23:44.003 "name": "raid_bdev1", 00:23:44.003 "raid_level": "raid5f", 00:23:44.003 "base_bdevs": [ 00:23:44.003 "malloc1", 00:23:44.003 "malloc2", 00:23:44.003 "malloc3" 00:23:44.003 ], 00:23:44.003 "superblock": false, 00:23:44.003 "strip_size_kb": 64, 00:23:44.003 "method": "bdev_raid_create", 00:23:44.003 "req_id": 1 00:23:44.003 } 00:23:44.003 Got JSON-RPC error response 00:23:44.003 response: 00:23:44.003 { 00:23:44.003 "code": -17, 00:23:44.003 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:44.003 } 00:23:44.003 17:02:32 -- common/autotest_common.sh@653 -- # es=1 00:23:44.003 17:02:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:44.003 17:02:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:44.003 17:02:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:44.003 17:02:32 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:23:44.003 17:02:32 -- bdev/bdev_raid.sh@403 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.261 17:02:32 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:23:44.261 17:02:32 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:23:44.261 17:02:32 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:44.261 [2024-11-05 17:02:33.108159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:44.261 [2024-11-05 17:02:33.108338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.261 [2024-11-05 17:02:33.108410] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:44.261 [2024-11-05 17:02:33.108512] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.261 [2024-11-05 17:02:33.110606] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.261 [2024-11-05 17:02:33.110763] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:44.261 [2024-11-05 17:02:33.110991] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:44.261 [2024-11-05 17:02:33.111129] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:44.261 pt1 00:23:44.261 17:02:33 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:44.261 17:02:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:44.261 17:02:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:44.261 17:02:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:44.261 17:02:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:44.261 17:02:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:44.261 17:02:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:44.261 17:02:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:44.261 17:02:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:44.261 17:02:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:44.261 17:02:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.261 17:02:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.524 17:02:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:44.524 "name": "raid_bdev1", 00:23:44.524 "uuid": "714bf854-67a7-4e59-a884-602a78fab7fa", 00:23:44.524 "strip_size_kb": 64, 00:23:44.524 "state": "configuring", 00:23:44.524 "raid_level": "raid5f", 00:23:44.524 "superblock": true, 00:23:44.524 "num_base_bdevs": 3, 00:23:44.524 "num_base_bdevs_discovered": 1, 00:23:44.524 "num_base_bdevs_operational": 3, 00:23:44.524 "base_bdevs_list": [ 00:23:44.524 { 00:23:44.524 "name": "pt1", 00:23:44.524 "uuid": "a48adccd-76fa-5421-9963-e9cad1367fd2", 00:23:44.524 "is_configured": true, 00:23:44.524 "data_offset": 2048, 00:23:44.524 "data_size": 63488 00:23:44.524 }, 00:23:44.524 { 00:23:44.524 "name": null, 00:23:44.524 "uuid": "09396314-fb62-5b23-a627-a716aeffe387", 00:23:44.524 "is_configured": false, 00:23:44.524 "data_offset": 2048, 00:23:44.524 "data_size": 63488 00:23:44.524 }, 00:23:44.524 { 00:23:44.524 "name": null, 00:23:44.524 "uuid": "d21785f4-f6ae-5048-86d4-56e1c8050826", 00:23:44.524 "is_configured": false, 00:23:44.524 "data_offset": 
2048, 00:23:44.524 "data_size": 63488 00:23:44.524 } 00:23:44.524 ] 00:23:44.524 }' 00:23:44.524 17:02:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:44.524 17:02:33 -- common/autotest_common.sh@10 -- # set +x 00:23:45.090 17:02:33 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:23:45.090 17:02:33 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:45.348 [2024-11-05 17:02:34.136333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:45.348 [2024-11-05 17:02:34.136528] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:45.348 [2024-11-05 17:02:34.136691] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:23:45.348 [2024-11-05 17:02:34.136809] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:45.348 [2024-11-05 17:02:34.137299] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:45.348 [2024-11-05 17:02:34.137452] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:45.348 [2024-11-05 17:02:34.137649] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:45.348 [2024-11-05 17:02:34.137773] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:45.348 pt2 00:23:45.348 17:02:34 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:45.606 [2024-11-05 17:02:34.384404] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:45.606 17:02:34 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:45.606 17:02:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:45.606 17:02:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:45.606 17:02:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:45.606 17:02:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:45.606 17:02:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:45.606 17:02:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:45.606 17:02:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:45.606 17:02:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:45.606 17:02:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:45.606 17:02:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.606 17:02:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.864 17:02:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:45.864 "name": "raid_bdev1", 00:23:45.864 "uuid": "714bf854-67a7-4e59-a884-602a78fab7fa", 00:23:45.864 "strip_size_kb": 64, 00:23:45.864 "state": "configuring", 00:23:45.864 "raid_level": "raid5f", 00:23:45.864 "superblock": true, 00:23:45.864 "num_base_bdevs": 3, 00:23:45.864 "num_base_bdevs_discovered": 1, 00:23:45.864 "num_base_bdevs_operational": 3, 00:23:45.864 "base_bdevs_list": [ 00:23:45.864 { 00:23:45.864 "name": "pt1", 00:23:45.864 "uuid": "a48adccd-76fa-5421-9963-e9cad1367fd2", 00:23:45.864 "is_configured": true, 00:23:45.864 "data_offset": 2048, 00:23:45.864 "data_size": 63488 00:23:45.864 }, 00:23:45.864 { 00:23:45.864 "name": null, 00:23:45.864 "uuid": 
"09396314-fb62-5b23-a627-a716aeffe387", 00:23:45.864 "is_configured": false, 00:23:45.864 "data_offset": 2048, 00:23:45.864 "data_size": 63488 00:23:45.864 }, 00:23:45.864 { 00:23:45.864 "name": null, 00:23:45.864 "uuid": "d21785f4-f6ae-5048-86d4-56e1c8050826", 00:23:45.864 "is_configured": false, 00:23:45.864 "data_offset": 2048, 00:23:45.864 "data_size": 63488 00:23:45.864 } 00:23:45.864 ] 00:23:45.864 }' 00:23:45.864 17:02:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:45.864 17:02:34 -- common/autotest_common.sh@10 -- # set +x 00:23:46.430 17:02:35 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:23:46.430 17:02:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:46.430 17:02:35 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:46.687 [2024-11-05 17:02:35.352543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:46.687 [2024-11-05 17:02:35.352774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:46.687 [2024-11-05 17:02:35.352846] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:46.687 [2024-11-05 17:02:35.353005] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:46.687 [2024-11-05 17:02:35.353523] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:46.687 [2024-11-05 17:02:35.353690] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:46.687 [2024-11-05 17:02:35.353884] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:46.687 [2024-11-05 17:02:35.353998] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:46.687 pt2 00:23:46.687 17:02:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:46.687 17:02:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:46.687 17:02:35 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:46.945 [2024-11-05 17:02:35.608594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:46.945 [2024-11-05 17:02:35.608828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:46.945 [2024-11-05 17:02:35.608996] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:46.945 [2024-11-05 17:02:35.609116] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:46.945 [2024-11-05 17:02:35.609618] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:46.945 [2024-11-05 17:02:35.609798] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:46.945 [2024-11-05 17:02:35.610009] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:46.945 [2024-11-05 17:02:35.610137] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:46.945 [2024-11-05 17:02:35.610373] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:23:46.945 [2024-11-05 17:02:35.610489] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:46.945 [2024-11-05 17:02:35.610625] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:23:46.945 [2024-11-05 17:02:35.614743] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:23:46.945 [2024-11-05 17:02:35.614937] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:23:46.945 [2024-11-05 17:02:35.615242] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:46.945 pt3 00:23:46.945 17:02:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:46.945 17:02:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:46.945 17:02:35 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:46.945 17:02:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:46.945 17:02:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:46.945 17:02:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:46.945 17:02:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:46.945 17:02:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:46.945 17:02:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:46.945 17:02:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:46.945 17:02:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:46.945 17:02:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:46.945 17:02:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.945 17:02:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.203 17:02:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:47.203 "name": "raid_bdev1", 00:23:47.203 "uuid": "714bf854-67a7-4e59-a884-602a78fab7fa", 00:23:47.203 "strip_size_kb": 64, 00:23:47.203 "state": "online", 00:23:47.203 "raid_level": "raid5f", 00:23:47.203 "superblock": true, 00:23:47.203 "num_base_bdevs": 3, 00:23:47.203 "num_base_bdevs_discovered": 3, 00:23:47.203 "num_base_bdevs_operational": 3, 00:23:47.204 "base_bdevs_list": [ 00:23:47.204 { 00:23:47.204 "name": "pt1", 00:23:47.204 "uuid": "a48adccd-76fa-5421-9963-e9cad1367fd2", 00:23:47.204 "is_configured": true, 00:23:47.204 "data_offset": 2048, 00:23:47.204 "data_size": 63488 00:23:47.204 }, 00:23:47.204 { 00:23:47.204 "name": "pt2", 00:23:47.204 "uuid": "09396314-fb62-5b23-a627-a716aeffe387", 00:23:47.204 "is_configured": true, 00:23:47.204 "data_offset": 2048, 00:23:47.204 "data_size": 63488 00:23:47.204 }, 00:23:47.204 { 00:23:47.204 "name": "pt3", 00:23:47.204 "uuid": "d21785f4-f6ae-5048-86d4-56e1c8050826", 00:23:47.204 "is_configured": true, 00:23:47.204 "data_offset": 2048, 00:23:47.204 "data_size": 63488 00:23:47.204 } 00:23:47.204 ] 00:23:47.204 }' 00:23:47.204 17:02:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:47.204 17:02:35 -- common/autotest_common.sh@10 -- # set +x 00:23:47.769 17:02:36 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:47.769 17:02:36 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:23:47.769 [2024-11-05 17:02:36.664572] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:48.027 17:02:36 -- bdev/bdev_raid.sh@430 -- # '[' 714bf854-67a7-4e59-a884-602a78fab7fa '!=' 714bf854-67a7-4e59-a884-602a78fab7fa ']' 00:23:48.027 17:02:36 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:23:48.027 17:02:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:48.027 
17:02:36 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:48.027 17:02:36 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:48.285 [2024-11-05 17:02:36.932533] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:48.285 17:02:36 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:48.285 17:02:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:48.285 17:02:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:48.285 17:02:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:48.285 17:02:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:48.285 17:02:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:48.285 17:02:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:48.285 17:02:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:48.285 17:02:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:48.285 17:02:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:48.285 17:02:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.285 17:02:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.285 17:02:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:48.285 "name": "raid_bdev1", 00:23:48.285 "uuid": "714bf854-67a7-4e59-a884-602a78fab7fa", 00:23:48.285 "strip_size_kb": 64, 00:23:48.285 "state": "online", 00:23:48.285 "raid_level": "raid5f", 00:23:48.285 "superblock": true, 00:23:48.285 "num_base_bdevs": 3, 00:23:48.285 "num_base_bdevs_discovered": 2, 00:23:48.285 "num_base_bdevs_operational": 2, 00:23:48.285 "base_bdevs_list": [ 00:23:48.285 { 00:23:48.285 "name": null, 00:23:48.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.285 "is_configured": false, 00:23:48.285 "data_offset": 2048, 00:23:48.285 "data_size": 63488 00:23:48.285 }, 00:23:48.285 { 00:23:48.285 "name": "pt2", 00:23:48.285 "uuid": "09396314-fb62-5b23-a627-a716aeffe387", 00:23:48.285 "is_configured": true, 00:23:48.285 "data_offset": 2048, 00:23:48.285 "data_size": 63488 00:23:48.285 }, 00:23:48.285 { 00:23:48.285 "name": "pt3", 00:23:48.285 "uuid": "d21785f4-f6ae-5048-86d4-56e1c8050826", 00:23:48.285 "is_configured": true, 00:23:48.285 "data_offset": 2048, 00:23:48.285 "data_size": 63488 00:23:48.285 } 00:23:48.285 ] 00:23:48.285 }' 00:23:48.285 17:02:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:48.285 17:02:37 -- common/autotest_common.sh@10 -- # set +x 00:23:49.219 17:02:37 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:49.219 [2024-11-05 17:02:38.020757] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:49.219 [2024-11-05 17:02:38.020904] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:49.219 [2024-11-05 17:02:38.021061] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:49.219 [2024-11-05 17:02:38.021260] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:49.219 [2024-11-05 17:02:38.021374] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:23:49.219 17:02:38 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.219 17:02:38 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:23:49.477 17:02:38 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:23:49.477 17:02:38 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:23:49.477 17:02:38 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:23:49.477 17:02:38 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:49.477 17:02:38 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:49.735 17:02:38 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:49.735 17:02:38 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:49.735 17:02:38 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:49.993 17:02:38 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:49.993 17:02:38 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:49.993 17:02:38 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:23:49.993 17:02:38 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:49.993 17:02:38 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:50.251 [2024-11-05 17:02:38.968913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:50.251 [2024-11-05 17:02:38.969156] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.251 [2024-11-05 17:02:38.969234] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:23:50.251 [2024-11-05 17:02:38.969443] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.251 [2024-11-05 17:02:38.971640] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.251 [2024-11-05 17:02:38.971828] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:50.251 [2024-11-05 17:02:38.972043] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:50.251 [2024-11-05 17:02:38.972209] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:50.251 pt2 00:23:50.251 17:02:38 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:23:50.251 17:02:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:50.251 17:02:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:50.251 17:02:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:50.251 17:02:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:50.251 17:02:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:50.251 17:02:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:50.251 17:02:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:50.251 17:02:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:50.251 17:02:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:50.251 17:02:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.251 17:02:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.509 17:02:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:50.509 "name": "raid_bdev1", 00:23:50.509 "uuid": "714bf854-67a7-4e59-a884-602a78fab7fa", 00:23:50.509 "strip_size_kb": 64, 
00:23:50.509 "state": "configuring", 00:23:50.509 "raid_level": "raid5f", 00:23:50.509 "superblock": true, 00:23:50.509 "num_base_bdevs": 3, 00:23:50.509 "num_base_bdevs_discovered": 1, 00:23:50.509 "num_base_bdevs_operational": 2, 00:23:50.509 "base_bdevs_list": [ 00:23:50.509 { 00:23:50.509 "name": null, 00:23:50.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.509 "is_configured": false, 00:23:50.509 "data_offset": 2048, 00:23:50.509 "data_size": 63488 00:23:50.509 }, 00:23:50.509 { 00:23:50.509 "name": "pt2", 00:23:50.509 "uuid": "09396314-fb62-5b23-a627-a716aeffe387", 00:23:50.509 "is_configured": true, 00:23:50.509 "data_offset": 2048, 00:23:50.509 "data_size": 63488 00:23:50.509 }, 00:23:50.509 { 00:23:50.509 "name": null, 00:23:50.509 "uuid": "d21785f4-f6ae-5048-86d4-56e1c8050826", 00:23:50.509 "is_configured": false, 00:23:50.509 "data_offset": 2048, 00:23:50.509 "data_size": 63488 00:23:50.509 } 00:23:50.509 ] 00:23:50.509 }' 00:23:50.509 17:02:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:50.509 17:02:39 -- common/autotest_common.sh@10 -- # set +x 00:23:51.076 17:02:39 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:23:51.076 17:02:39 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:51.076 17:02:39 -- bdev/bdev_raid.sh@462 -- # i=2 00:23:51.076 17:02:39 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:51.334 [2024-11-05 17:02:40.061308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:51.334 [2024-11-05 17:02:40.061603] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:51.334 [2024-11-05 17:02:40.061810] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:51.334 [2024-11-05 17:02:40.061965] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:51.334 [2024-11-05 17:02:40.062781] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:51.334 [2024-11-05 17:02:40.063007] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:51.334 [2024-11-05 17:02:40.063335] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:51.334 [2024-11-05 17:02:40.063522] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:51.334 [2024-11-05 17:02:40.063842] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:23:51.334 [2024-11-05 17:02:40.063995] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:51.334 [2024-11-05 17:02:40.064231] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:51.334 [2024-11-05 17:02:40.070801] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:23:51.334 [2024-11-05 17:02:40.070978] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:23:51.334 [2024-11-05 17:02:40.071462] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:51.334 pt3 00:23:51.334 17:02:40 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:51.334 17:02:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:51.334 17:02:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:51.334 
17:02:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:51.334 17:02:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:51.334 17:02:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:51.334 17:02:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:51.334 17:02:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:51.334 17:02:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:51.334 17:02:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:51.334 17:02:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.334 17:02:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.592 17:02:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:51.592 "name": "raid_bdev1", 00:23:51.592 "uuid": "714bf854-67a7-4e59-a884-602a78fab7fa", 00:23:51.592 "strip_size_kb": 64, 00:23:51.592 "state": "online", 00:23:51.592 "raid_level": "raid5f", 00:23:51.592 "superblock": true, 00:23:51.592 "num_base_bdevs": 3, 00:23:51.592 "num_base_bdevs_discovered": 2, 00:23:51.592 "num_base_bdevs_operational": 2, 00:23:51.592 "base_bdevs_list": [ 00:23:51.592 { 00:23:51.592 "name": null, 00:23:51.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.592 "is_configured": false, 00:23:51.592 "data_offset": 2048, 00:23:51.592 "data_size": 63488 00:23:51.592 }, 00:23:51.592 { 00:23:51.592 "name": "pt2", 00:23:51.592 "uuid": "09396314-fb62-5b23-a627-a716aeffe387", 00:23:51.592 "is_configured": true, 00:23:51.592 "data_offset": 2048, 00:23:51.592 "data_size": 63488 00:23:51.592 }, 00:23:51.592 { 00:23:51.592 "name": "pt3", 00:23:51.592 "uuid": "d21785f4-f6ae-5048-86d4-56e1c8050826", 00:23:51.592 "is_configured": true, 00:23:51.592 "data_offset": 2048, 00:23:51.592 "data_size": 63488 00:23:51.592 } 00:23:51.592 ] 00:23:51.592 }' 00:23:51.592 17:02:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:51.592 17:02:40 -- common/autotest_common.sh@10 -- # set +x 00:23:52.197 17:02:40 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:23:52.197 17:02:40 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:52.455 [2024-11-05 17:02:41.167082] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:52.455 [2024-11-05 17:02:41.167281] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:52.455 [2024-11-05 17:02:41.167443] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:52.455 [2024-11-05 17:02:41.167604] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:52.455 [2024-11-05 17:02:41.167716] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:23:52.455 17:02:41 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.455 17:02:41 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:23:52.714 17:02:41 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:23:52.714 17:02:41 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:23:52.714 17:02:41 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:52.714 [2024-11-05 17:02:41.599169] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:52.714 [2024-11-05 17:02:41.599383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:52.714 [2024-11-05 17:02:41.599462] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:52.714 [2024-11-05 17:02:41.599627] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:52.714 [2024-11-05 17:02:41.602121] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:52.714 [2024-11-05 17:02:41.602288] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:52.714 [2024-11-05 17:02:41.602544] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:52.714 [2024-11-05 17:02:41.602698] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:52.714 pt1 00:23:52.972 17:02:41 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:52.973 17:02:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:52.973 17:02:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:52.973 17:02:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:52.973 17:02:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:52.973 17:02:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:52.973 17:02:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:52.973 17:02:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:52.973 17:02:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:52.973 17:02:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:52.973 17:02:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.973 17:02:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.973 17:02:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:52.973 "name": "raid_bdev1", 00:23:52.973 "uuid": "714bf854-67a7-4e59-a884-602a78fab7fa", 00:23:52.973 "strip_size_kb": 64, 00:23:52.973 "state": "configuring", 00:23:52.973 "raid_level": "raid5f", 00:23:52.973 "superblock": true, 00:23:52.973 "num_base_bdevs": 3, 00:23:52.973 "num_base_bdevs_discovered": 1, 00:23:52.973 "num_base_bdevs_operational": 3, 00:23:52.973 "base_bdevs_list": [ 00:23:52.973 { 00:23:52.973 "name": "pt1", 00:23:52.973 "uuid": "a48adccd-76fa-5421-9963-e9cad1367fd2", 00:23:52.973 "is_configured": true, 00:23:52.973 "data_offset": 2048, 00:23:52.973 "data_size": 63488 00:23:52.973 }, 00:23:52.973 { 00:23:52.973 "name": null, 00:23:52.973 "uuid": "09396314-fb62-5b23-a627-a716aeffe387", 00:23:52.973 "is_configured": false, 00:23:52.973 "data_offset": 2048, 00:23:52.973 "data_size": 63488 00:23:52.973 }, 00:23:52.973 { 00:23:52.973 "name": null, 00:23:52.973 "uuid": "d21785f4-f6ae-5048-86d4-56e1c8050826", 00:23:52.973 "is_configured": false, 00:23:52.973 "data_offset": 2048, 00:23:52.973 "data_size": 63488 00:23:52.973 } 00:23:52.973 ] 00:23:52.973 }' 00:23:52.973 17:02:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:52.973 17:02:41 -- common/autotest_common.sh@10 -- # set +x 00:23:53.909 17:02:42 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:23:53.909 17:02:42 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:53.909 17:02:42 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:23:53.909 17:02:42 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:53.909 17:02:42 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:53.909 17:02:42 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:54.167 17:02:42 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:54.167 17:02:42 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:54.167 17:02:42 -- bdev/bdev_raid.sh@489 -- # i=2 00:23:54.167 17:02:42 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:54.426 [2024-11-05 17:02:43.131520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:54.426 [2024-11-05 17:02:43.131997] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:54.426 [2024-11-05 17:02:43.132342] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:23:54.426 [2024-11-05 17:02:43.132606] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:54.426 [2024-11-05 17:02:43.134047] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:54.426 [2024-11-05 17:02:43.134385] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:54.426 [2024-11-05 17:02:43.134947] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:54.426 [2024-11-05 17:02:43.135202] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:54.426 [2024-11-05 17:02:43.135451] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:54.426 [2024-11-05 17:02:43.135718] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state configuring 00:23:54.426 [2024-11-05 17:02:43.136074] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:54.426 pt3 00:23:54.426 17:02:43 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:23:54.426 17:02:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:54.426 17:02:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:54.426 17:02:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:54.426 17:02:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:54.426 17:02:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:54.426 17:02:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:54.426 17:02:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:54.426 17:02:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:54.426 17:02:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:54.426 17:02:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.426 17:02:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.685 17:02:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:54.685 "name": "raid_bdev1", 00:23:54.685 "uuid": "714bf854-67a7-4e59-a884-602a78fab7fa", 00:23:54.685 "strip_size_kb": 64, 00:23:54.685 "state": "configuring", 00:23:54.685 "raid_level": "raid5f", 00:23:54.685 "superblock": true, 00:23:54.685 "num_base_bdevs": 3, 00:23:54.685 
"num_base_bdevs_discovered": 1, 00:23:54.685 "num_base_bdevs_operational": 2, 00:23:54.685 "base_bdevs_list": [ 00:23:54.685 { 00:23:54.685 "name": null, 00:23:54.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.685 "is_configured": false, 00:23:54.685 "data_offset": 2048, 00:23:54.685 "data_size": 63488 00:23:54.685 }, 00:23:54.685 { 00:23:54.685 "name": null, 00:23:54.685 "uuid": "09396314-fb62-5b23-a627-a716aeffe387", 00:23:54.685 "is_configured": false, 00:23:54.685 "data_offset": 2048, 00:23:54.685 "data_size": 63488 00:23:54.685 }, 00:23:54.685 { 00:23:54.685 "name": "pt3", 00:23:54.685 "uuid": "d21785f4-f6ae-5048-86d4-56e1c8050826", 00:23:54.685 "is_configured": true, 00:23:54.685 "data_offset": 2048, 00:23:54.685 "data_size": 63488 00:23:54.685 } 00:23:54.685 ] 00:23:54.685 }' 00:23:54.685 17:02:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:54.685 17:02:43 -- common/autotest_common.sh@10 -- # set +x 00:23:55.252 17:02:43 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:23:55.252 17:02:43 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:55.252 17:02:43 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:55.253 [2024-11-05 17:02:44.096731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:55.253 [2024-11-05 17:02:44.097011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:55.253 [2024-11-05 17:02:44.097084] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:23:55.253 [2024-11-05 17:02:44.097323] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:55.253 [2024-11-05 17:02:44.097910] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:55.253 [2024-11-05 17:02:44.098083] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:55.253 [2024-11-05 17:02:44.098282] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:55.253 [2024-11-05 17:02:44.098435] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:55.253 [2024-11-05 17:02:44.098659] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:23:55.253 [2024-11-05 17:02:44.098778] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:55.253 [2024-11-05 17:02:44.098929] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:23:55.253 [2024-11-05 17:02:44.103103] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:23:55.253 [2024-11-05 17:02:44.103245] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:23:55.253 [2024-11-05 17:02:44.103585] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:55.253 pt2 00:23:55.253 17:02:44 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:23:55.253 17:02:44 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:55.253 17:02:44 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:55.253 17:02:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:55.253 17:02:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:55.253 17:02:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 
00:23:55.253 17:02:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:55.253 17:02:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:55.253 17:02:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:55.253 17:02:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:55.253 17:02:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:55.253 17:02:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:55.253 17:02:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.253 17:02:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.511 17:02:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:55.511 "name": "raid_bdev1", 00:23:55.511 "uuid": "714bf854-67a7-4e59-a884-602a78fab7fa", 00:23:55.511 "strip_size_kb": 64, 00:23:55.511 "state": "online", 00:23:55.511 "raid_level": "raid5f", 00:23:55.511 "superblock": true, 00:23:55.511 "num_base_bdevs": 3, 00:23:55.511 "num_base_bdevs_discovered": 2, 00:23:55.511 "num_base_bdevs_operational": 2, 00:23:55.511 "base_bdevs_list": [ 00:23:55.511 { 00:23:55.511 "name": null, 00:23:55.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.511 "is_configured": false, 00:23:55.511 "data_offset": 2048, 00:23:55.511 "data_size": 63488 00:23:55.511 }, 00:23:55.511 { 00:23:55.511 "name": "pt2", 00:23:55.511 "uuid": "09396314-fb62-5b23-a627-a716aeffe387", 00:23:55.511 "is_configured": true, 00:23:55.511 "data_offset": 2048, 00:23:55.511 "data_size": 63488 00:23:55.511 }, 00:23:55.511 { 00:23:55.511 "name": "pt3", 00:23:55.511 "uuid": "d21785f4-f6ae-5048-86d4-56e1c8050826", 00:23:55.512 "is_configured": true, 00:23:55.512 "data_offset": 2048, 00:23:55.512 "data_size": 63488 00:23:55.512 } 00:23:55.512 ] 00:23:55.512 }' 00:23:55.512 17:02:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:55.512 17:02:44 -- common/autotest_common.sh@10 -- # set +x 00:23:56.079 17:02:44 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:56.079 17:02:44 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:23:56.338 [2024-11-05 17:02:45.124846] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:56.338 17:02:45 -- bdev/bdev_raid.sh@506 -- # '[' 714bf854-67a7-4e59-a884-602a78fab7fa '!=' 714bf854-67a7-4e59-a884-602a78fab7fa ']' 00:23:56.338 17:02:45 -- bdev/bdev_raid.sh@511 -- # killprocess 127730 00:23:56.338 17:02:45 -- common/autotest_common.sh@936 -- # '[' -z 127730 ']' 00:23:56.338 17:02:45 -- common/autotest_common.sh@940 -- # kill -0 127730 00:23:56.338 17:02:45 -- common/autotest_common.sh@941 -- # uname 00:23:56.338 17:02:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:56.338 17:02:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127730 00:23:56.338 killing process with pid 127730 00:23:56.338 17:02:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:56.338 17:02:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:56.338 17:02:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 127730' 00:23:56.338 17:02:45 -- common/autotest_common.sh@955 -- # kill 127730 00:23:56.338 [2024-11-05 17:02:45.163405] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:56.338 17:02:45 -- common/autotest_common.sh@960 -- # wait 127730 00:23:56.338 [2024-11-05 17:02:45.163476] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:56.338 [2024-11-05 17:02:45.163533] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:56.338 [2024-11-05 17:02:45.163543] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:23:56.596 [2024-11-05 17:02:45.354399] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:57.531 ************************************ 00:23:57.531 END TEST raid5f_superblock_test 00:23:57.531 ************************************ 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@513 -- # return 0 00:23:57.531 00:23:57.531 real 0m18.913s 00:23:57.531 user 0m34.733s 00:23:57.531 sys 0m2.206s 00:23:57.531 17:02:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:57.531 17:02:46 -- common/autotest_common.sh@10 -- # set +x 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:23:57.531 17:02:46 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:23:57.531 17:02:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:57.531 17:02:46 -- common/autotest_common.sh@10 -- # set +x 00:23:57.531 ************************************ 00:23:57.531 START TEST raid5f_rebuild_test 00:23:57.531 ************************************ 00:23:57.531 17:02:46 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 3 false false 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:57.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
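The counter loop traced above is the idiom bdev_raid.sh uses to assemble its base bdev list: each pass echoes BaseBdevN, and the command substitution captured in the array assignment that follows collects the three names. A minimal standalone sketch of the same idiom:

    num_base_bdevs=3
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))
    echo "${base_bdevs[@]}"    # -> BaseBdev1 BaseBdev2 BaseBdev3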
00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:57.531 17:02:46 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:23:57.532 17:02:46 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:57.532 17:02:46 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:57.532 17:02:46 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:23:57.532 17:02:46 -- bdev/bdev_raid.sh@544 -- # raid_pid=128334 00:23:57.532 17:02:46 -- bdev/bdev_raid.sh@545 -- # waitforlisten 128334 /var/tmp/spdk-raid.sock 00:23:57.532 17:02:46 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:57.532 17:02:46 -- common/autotest_common.sh@829 -- # '[' -z 128334 ']' 00:23:57.532 17:02:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:57.532 17:02:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.532 17:02:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:57.532 17:02:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.532 17:02:46 -- common/autotest_common.sh@10 -- # set +x 00:23:57.532 [2024-11-05 17:02:46.398683] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:57.532 [2024-11-05 17:02:46.399096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128334 ] 00:23:57.532 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:57.532 Zero copy mechanism will not be used. 
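From this point on the test is driven entirely over the UNIX-domain RPC socket: bdevperf is started in the background as a long-lived target (-z makes it wait to be driven over RPC) and the script blocks until the socket answers. A condensed sketch of that launch-and-wait pattern, using the paths from the trace above; rpc_get_methods is just one cheap RPC to probe readiness, and the suite's own waitforlisten helper adds retry limits on top of this:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # poll until the target accepts requests on the socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done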
00:23:57.790 [2024-11-05 17:02:46.569407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.048 [2024-11-05 17:02:46.728962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.048 [2024-11-05 17:02:46.899317] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:58.615 17:02:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:58.615 17:02:47 -- common/autotest_common.sh@862 -- # return 0 00:23:58.615 17:02:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:58.615 17:02:47 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:58.615 17:02:47 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:58.874 BaseBdev1 00:23:58.874 17:02:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:58.874 17:02:47 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:58.874 17:02:47 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:59.132 BaseBdev2 00:23:59.132 17:02:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:59.132 17:02:47 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:59.132 17:02:47 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:59.391 BaseBdev3 00:23:59.391 17:02:48 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:59.649 spare_malloc 00:23:59.649 17:02:48 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:59.908 spare_delay 00:23:59.908 17:02:48 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:00.166 [2024-11-05 17:02:48.862536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:00.166 [2024-11-05 17:02:48.862749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:00.166 [2024-11-05 17:02:48.862822] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:24:00.166 [2024-11-05 17:02:48.863083] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:00.166 [2024-11-05 17:02:48.865341] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:00.166 [2024-11-05 17:02:48.865511] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:00.166 spare 00:24:00.166 17:02:48 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:24:00.166 [2024-11-05 17:02:49.050609] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:00.166 [2024-11-05 17:02:49.052498] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:00.166 [2024-11-05 17:02:49.052704] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:00.166 [2024-11-05 17:02:49.052901] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:24:00.166 
[2024-11-05 17:02:49.053011] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:24:00.166 [2024-11-05 17:02:49.053208] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:24:00.166 [2024-11-05 17:02:49.057754] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:24:00.166 [2024-11-05 17:02:49.057894] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:24:00.166 [2024-11-05 17:02:49.058174] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:00.425 17:02:49 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:00.425 17:02:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:00.425 17:02:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:00.425 17:02:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:00.425 17:02:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:00.425 17:02:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:00.425 17:02:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:00.425 17:02:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:00.425 17:02:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:00.425 17:02:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:00.425 17:02:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.425 17:02:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.425 17:02:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:00.425 "name": "raid_bdev1", 00:24:00.425 "uuid": "b903cf40-4507-4570-8f1b-80497dc4b913", 00:24:00.425 "strip_size_kb": 64, 00:24:00.425 "state": "online", 00:24:00.425 "raid_level": "raid5f", 00:24:00.425 "superblock": false, 00:24:00.425 "num_base_bdevs": 3, 00:24:00.425 "num_base_bdevs_discovered": 3, 00:24:00.425 "num_base_bdevs_operational": 3, 00:24:00.425 "base_bdevs_list": [ 00:24:00.425 { 00:24:00.425 "name": "BaseBdev1", 00:24:00.425 "uuid": "a25d05b7-efa1-4621-9264-d595a200f11e", 00:24:00.425 "is_configured": true, 00:24:00.425 "data_offset": 0, 00:24:00.425 "data_size": 65536 00:24:00.425 }, 00:24:00.425 { 00:24:00.425 "name": "BaseBdev2", 00:24:00.425 "uuid": "a7769992-b9a7-452d-bc6e-11299e77ecf6", 00:24:00.425 "is_configured": true, 00:24:00.425 "data_offset": 0, 00:24:00.425 "data_size": 65536 00:24:00.425 }, 00:24:00.425 { 00:24:00.425 "name": "BaseBdev3", 00:24:00.425 "uuid": "02fca7e1-4640-48ec-832e-5e7c069360d3", 00:24:00.425 "is_configured": true, 00:24:00.425 "data_offset": 0, 00:24:00.425 "data_size": 65536 00:24:00.425 } 00:24:00.425 ] 00:24:00.425 }' 00:24:00.425 17:02:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:00.425 17:02:49 -- common/autotest_common.sh@10 -- # set +x 00:24:00.996 17:02:49 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:00.996 17:02:49 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:01.255 [2024-11-05 17:02:50.011305] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:01.255 17:02:50 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:24:01.255 17:02:50 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:24:01.255 17:02:50 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:01.514 17:02:50 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:24:01.514 17:02:50 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:01.514 17:02:50 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:01.514 17:02:50 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:01.514 17:02:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:01.514 17:02:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:01.514 17:02:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:01.514 17:02:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:01.514 17:02:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:01.514 17:02:50 -- bdev/nbd_common.sh@12 -- # local i 00:24:01.514 17:02:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:01.514 17:02:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:01.514 17:02:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:01.773 [2024-11-05 17:02:50.423316] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:24:01.773 /dev/nbd0 00:24:01.773 17:02:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:01.773 17:02:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:01.773 17:02:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:01.773 17:02:50 -- common/autotest_common.sh@867 -- # local i 00:24:01.773 17:02:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:01.773 17:02:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:01.773 17:02:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:01.773 17:02:50 -- common/autotest_common.sh@871 -- # break 00:24:01.773 17:02:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:01.773 17:02:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:01.773 17:02:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:01.773 1+0 records in 00:24:01.773 1+0 records out 00:24:01.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651603 s, 6.3 MB/s 00:24:01.773 17:02:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:01.773 17:02:50 -- common/autotest_common.sh@884 -- # size=4096 00:24:01.773 17:02:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:01.773 17:02:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:01.773 17:02:50 -- common/autotest_common.sh@887 -- # return 0 00:24:01.773 17:02:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:01.773 17:02:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:01.773 17:02:50 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:24:01.773 17:02:50 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:24:01.773 17:02:50 -- bdev/bdev_raid.sh@582 -- # echo 128 00:24:01.773 17:02:50 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:24:02.032 512+0 records in 00:24:02.032 512+0 records out 00:24:02.032 67108864 bytes (67 MB, 64 MiB) copied, 0.35154 s, 191 MB/s 00:24:02.032 17:02:50 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:02.032 17:02:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 
00:24:02.032 17:02:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:02.032 17:02:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:02.032 17:02:50 -- bdev/nbd_common.sh@51 -- # local i 00:24:02.032 17:02:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:02.032 17:02:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:02.290 17:02:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:02.291 17:02:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:02.291 17:02:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:02.291 17:02:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:02.291 17:02:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:02.291 17:02:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:02.291 [2024-11-05 17:02:51.101849] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:02.291 17:02:51 -- bdev/nbd_common.sh@41 -- # break 00:24:02.291 17:02:51 -- bdev/nbd_common.sh@45 -- # return 0 00:24:02.291 17:02:51 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:02.549 [2024-11-05 17:02:51.287926] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:02.549 17:02:51 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:02.549 17:02:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:02.549 17:02:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:02.549 17:02:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:02.549 17:02:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:02.549 17:02:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:02.549 17:02:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:02.549 17:02:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:02.549 17:02:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:02.549 17:02:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:02.549 17:02:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.549 17:02:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.808 17:02:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:02.808 "name": "raid_bdev1", 00:24:02.808 "uuid": "b903cf40-4507-4570-8f1b-80497dc4b913", 00:24:02.808 "strip_size_kb": 64, 00:24:02.808 "state": "online", 00:24:02.808 "raid_level": "raid5f", 00:24:02.808 "superblock": false, 00:24:02.808 "num_base_bdevs": 3, 00:24:02.808 "num_base_bdevs_discovered": 2, 00:24:02.808 "num_base_bdevs_operational": 2, 00:24:02.808 "base_bdevs_list": [ 00:24:02.808 { 00:24:02.808 "name": null, 00:24:02.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.808 "is_configured": false, 00:24:02.808 "data_offset": 0, 00:24:02.808 "data_size": 65536 00:24:02.808 }, 00:24:02.808 { 00:24:02.808 "name": "BaseBdev2", 00:24:02.808 "uuid": "a7769992-b9a7-452d-bc6e-11299e77ecf6", 00:24:02.808 "is_configured": true, 00:24:02.808 "data_offset": 0, 00:24:02.808 "data_size": 65536 00:24:02.808 }, 00:24:02.808 { 00:24:02.808 "name": "BaseBdev3", 00:24:02.808 "uuid": "02fca7e1-4640-48ec-832e-5e7c069360d3", 00:24:02.808 "is_configured": true, 00:24:02.808 "data_offset": 0, 00:24:02.808 "data_size": 65536 00:24:02.808 } 00:24:02.808 ] 00:24:02.808 }' 
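The JSON just captured is the degraded-array snapshot: after bdev_raid_remove_base_bdev BaseBdev1, slot 0 reports a null name and the all-zero UUID, yet state stays online because raid5f tolerates one missing member. verify_raid_bdev_state boils down to fetching that object and asserting on its fields, roughly as follows (rpc shortened to the rpc.py path for brevity):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<< "$info") == online ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 2 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == 2 ]]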
00:24:02.808 17:02:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:02.808 17:02:51 -- common/autotest_common.sh@10 -- # set +x 00:24:03.375 17:02:52 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:03.634 [2024-11-05 17:02:52.436161] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:03.634 [2024-11-05 17:02:52.436326] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:03.634 [2024-11-05 17:02:52.447411] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:24:03.634 [2024-11-05 17:02:52.453222] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:03.634 17:02:52 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:04.569 17:02:53 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:04.569 17:02:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:04.569 17:02:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:04.569 17:02:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:04.569 17:02:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:04.827 17:02:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.827 17:02:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.827 17:02:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:04.827 "name": "raid_bdev1", 00:24:04.827 "uuid": "b903cf40-4507-4570-8f1b-80497dc4b913", 00:24:04.827 "strip_size_kb": 64, 00:24:04.827 "state": "online", 00:24:04.827 "raid_level": "raid5f", 00:24:04.827 "superblock": false, 00:24:04.827 "num_base_bdevs": 3, 00:24:04.827 "num_base_bdevs_discovered": 3, 00:24:04.827 "num_base_bdevs_operational": 3, 00:24:04.827 "process": { 00:24:04.827 "type": "rebuild", 00:24:04.827 "target": "spare", 00:24:04.827 "progress": { 00:24:04.827 "blocks": 24576, 00:24:04.827 "percent": 18 00:24:04.827 } 00:24:04.827 }, 00:24:04.827 "base_bdevs_list": [ 00:24:04.827 { 00:24:04.827 "name": "spare", 00:24:04.827 "uuid": "7dce760b-13f1-5e43-9f67-f89f7829117f", 00:24:04.827 "is_configured": true, 00:24:04.827 "data_offset": 0, 00:24:04.827 "data_size": 65536 00:24:04.827 }, 00:24:04.827 { 00:24:04.827 "name": "BaseBdev2", 00:24:04.827 "uuid": "a7769992-b9a7-452d-bc6e-11299e77ecf6", 00:24:04.827 "is_configured": true, 00:24:04.827 "data_offset": 0, 00:24:04.827 "data_size": 65536 00:24:04.827 }, 00:24:04.827 { 00:24:04.827 "name": "BaseBdev3", 00:24:04.827 "uuid": "02fca7e1-4640-48ec-832e-5e7c069360d3", 00:24:04.827 "is_configured": true, 00:24:04.827 "data_offset": 0, 00:24:04.827 "data_size": 65536 00:24:04.827 } 00:24:04.827 ] 00:24:04.827 }' 00:24:04.827 17:02:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:05.086 17:02:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:05.086 17:02:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:05.086 17:02:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:05.086 17:02:53 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:05.344 [2024-11-05 17:02:54.026607] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:05.344 [2024-11-05 17:02:54.065778] 
bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:05.344 [2024-11-05 17:02:54.066000] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:05.344 17:02:54 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:05.344 17:02:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:05.344 17:02:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:05.344 17:02:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:05.344 17:02:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:05.344 17:02:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:05.344 17:02:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:05.344 17:02:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:05.344 17:02:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:05.344 17:02:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:05.344 17:02:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.344 17:02:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.603 17:02:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:05.603 "name": "raid_bdev1", 00:24:05.603 "uuid": "b903cf40-4507-4570-8f1b-80497dc4b913", 00:24:05.603 "strip_size_kb": 64, 00:24:05.603 "state": "online", 00:24:05.603 "raid_level": "raid5f", 00:24:05.603 "superblock": false, 00:24:05.603 "num_base_bdevs": 3, 00:24:05.603 "num_base_bdevs_discovered": 2, 00:24:05.603 "num_base_bdevs_operational": 2, 00:24:05.603 "base_bdevs_list": [ 00:24:05.603 { 00:24:05.603 "name": null, 00:24:05.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.603 "is_configured": false, 00:24:05.603 "data_offset": 0, 00:24:05.603 "data_size": 65536 00:24:05.603 }, 00:24:05.603 { 00:24:05.603 "name": "BaseBdev2", 00:24:05.603 "uuid": "a7769992-b9a7-452d-bc6e-11299e77ecf6", 00:24:05.603 "is_configured": true, 00:24:05.603 "data_offset": 0, 00:24:05.603 "data_size": 65536 00:24:05.603 }, 00:24:05.603 { 00:24:05.603 "name": "BaseBdev3", 00:24:05.603 "uuid": "02fca7e1-4640-48ec-832e-5e7c069360d3", 00:24:05.603 "is_configured": true, 00:24:05.603 "data_offset": 0, 00:24:05.603 "data_size": 65536 00:24:05.603 } 00:24:05.603 ] 00:24:05.603 }' 00:24:05.603 17:02:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:05.603 17:02:54 -- common/autotest_common.sh@10 -- # set +x 00:24:06.169 17:02:54 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:06.169 17:02:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:06.169 17:02:54 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:06.169 17:02:54 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:06.169 17:02:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:06.169 17:02:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.169 17:02:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.426 17:02:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:06.426 "name": "raid_bdev1", 00:24:06.426 "uuid": "b903cf40-4507-4570-8f1b-80497dc4b913", 00:24:06.426 "strip_size_kb": 64, 00:24:06.426 "state": "online", 00:24:06.426 "raid_level": "raid5f", 00:24:06.426 "superblock": false, 00:24:06.426 "num_base_bdevs": 3, 00:24:06.426 
"num_base_bdevs_discovered": 2, 00:24:06.426 "num_base_bdevs_operational": 2, 00:24:06.426 "base_bdevs_list": [ 00:24:06.426 { 00:24:06.426 "name": null, 00:24:06.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.426 "is_configured": false, 00:24:06.426 "data_offset": 0, 00:24:06.426 "data_size": 65536 00:24:06.427 }, 00:24:06.427 { 00:24:06.427 "name": "BaseBdev2", 00:24:06.427 "uuid": "a7769992-b9a7-452d-bc6e-11299e77ecf6", 00:24:06.427 "is_configured": true, 00:24:06.427 "data_offset": 0, 00:24:06.427 "data_size": 65536 00:24:06.427 }, 00:24:06.427 { 00:24:06.427 "name": "BaseBdev3", 00:24:06.427 "uuid": "02fca7e1-4640-48ec-832e-5e7c069360d3", 00:24:06.427 "is_configured": true, 00:24:06.427 "data_offset": 0, 00:24:06.427 "data_size": 65536 00:24:06.427 } 00:24:06.427 ] 00:24:06.427 }' 00:24:06.427 17:02:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:06.427 17:02:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:06.427 17:02:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:06.427 17:02:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:06.427 17:02:55 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:06.684 [2024-11-05 17:02:55.538687] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:06.684 [2024-11-05 17:02:55.538861] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:06.684 [2024-11-05 17:02:55.548997] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:24:06.684 [2024-11-05 17:02:55.554556] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:06.684 17:02:55 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:08.056 17:02:56 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.056 17:02:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:08.056 17:02:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:08.056 17:02:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:08.056 17:02:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:08.057 17:02:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.057 17:02:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.057 17:02:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:08.057 "name": "raid_bdev1", 00:24:08.057 "uuid": "b903cf40-4507-4570-8f1b-80497dc4b913", 00:24:08.057 "strip_size_kb": 64, 00:24:08.057 "state": "online", 00:24:08.057 "raid_level": "raid5f", 00:24:08.057 "superblock": false, 00:24:08.057 "num_base_bdevs": 3, 00:24:08.057 "num_base_bdevs_discovered": 3, 00:24:08.057 "num_base_bdevs_operational": 3, 00:24:08.057 "process": { 00:24:08.057 "type": "rebuild", 00:24:08.057 "target": "spare", 00:24:08.057 "progress": { 00:24:08.057 "blocks": 22528, 00:24:08.057 "percent": 17 00:24:08.057 } 00:24:08.057 }, 00:24:08.057 "base_bdevs_list": [ 00:24:08.057 { 00:24:08.057 "name": "spare", 00:24:08.057 "uuid": "7dce760b-13f1-5e43-9f67-f89f7829117f", 00:24:08.057 "is_configured": true, 00:24:08.057 "data_offset": 0, 00:24:08.057 "data_size": 65536 00:24:08.057 }, 00:24:08.057 { 00:24:08.057 "name": "BaseBdev2", 00:24:08.057 "uuid": "a7769992-b9a7-452d-bc6e-11299e77ecf6", 00:24:08.057 "is_configured": true, 
00:24:08.057 "data_offset": 0, 00:24:08.057 "data_size": 65536 00:24:08.057 }, 00:24:08.057 { 00:24:08.057 "name": "BaseBdev3", 00:24:08.057 "uuid": "02fca7e1-4640-48ec-832e-5e7c069360d3", 00:24:08.057 "is_configured": true, 00:24:08.057 "data_offset": 0, 00:24:08.057 "data_size": 65536 00:24:08.057 } 00:24:08.057 ] 00:24:08.057 }' 00:24:08.057 17:02:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:08.057 17:02:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:08.057 17:02:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:08.057 17:02:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:08.057 17:02:56 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:24:08.057 17:02:56 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:24:08.057 17:02:56 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:24:08.057 17:02:56 -- bdev/bdev_raid.sh@657 -- # local timeout=616 00:24:08.057 17:02:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:08.057 17:02:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.057 17:02:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:08.057 17:02:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:08.057 17:02:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:08.057 17:02:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:08.057 17:02:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.057 17:02:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.315 17:02:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:08.315 "name": "raid_bdev1", 00:24:08.315 "uuid": "b903cf40-4507-4570-8f1b-80497dc4b913", 00:24:08.315 "strip_size_kb": 64, 00:24:08.315 "state": "online", 00:24:08.315 "raid_level": "raid5f", 00:24:08.315 "superblock": false, 00:24:08.315 "num_base_bdevs": 3, 00:24:08.315 "num_base_bdevs_discovered": 3, 00:24:08.315 "num_base_bdevs_operational": 3, 00:24:08.315 "process": { 00:24:08.315 "type": "rebuild", 00:24:08.315 "target": "spare", 00:24:08.315 "progress": { 00:24:08.315 "blocks": 30720, 00:24:08.315 "percent": 23 00:24:08.315 } 00:24:08.315 }, 00:24:08.315 "base_bdevs_list": [ 00:24:08.315 { 00:24:08.315 "name": "spare", 00:24:08.315 "uuid": "7dce760b-13f1-5e43-9f67-f89f7829117f", 00:24:08.315 "is_configured": true, 00:24:08.315 "data_offset": 0, 00:24:08.315 "data_size": 65536 00:24:08.315 }, 00:24:08.315 { 00:24:08.315 "name": "BaseBdev2", 00:24:08.315 "uuid": "a7769992-b9a7-452d-bc6e-11299e77ecf6", 00:24:08.315 "is_configured": true, 00:24:08.315 "data_offset": 0, 00:24:08.315 "data_size": 65536 00:24:08.315 }, 00:24:08.315 { 00:24:08.315 "name": "BaseBdev3", 00:24:08.315 "uuid": "02fca7e1-4640-48ec-832e-5e7c069360d3", 00:24:08.315 "is_configured": true, 00:24:08.315 "data_offset": 0, 00:24:08.315 "data_size": 65536 00:24:08.315 } 00:24:08.315 ] 00:24:08.315 }' 00:24:08.315 17:02:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:08.315 17:02:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:08.315 17:02:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:08.573 17:02:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:08.573 17:02:57 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:09.540 17:02:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:09.540 
17:02:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:09.540 17:02:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:09.540 17:02:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:09.540 17:02:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:09.540 17:02:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:09.540 17:02:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.540 17:02:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.799 17:02:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:09.799 "name": "raid_bdev1", 00:24:09.799 "uuid": "b903cf40-4507-4570-8f1b-80497dc4b913", 00:24:09.799 "strip_size_kb": 64, 00:24:09.799 "state": "online", 00:24:09.799 "raid_level": "raid5f", 00:24:09.799 "superblock": false, 00:24:09.799 "num_base_bdevs": 3, 00:24:09.799 "num_base_bdevs_discovered": 3, 00:24:09.799 "num_base_bdevs_operational": 3, 00:24:09.799 "process": { 00:24:09.799 "type": "rebuild", 00:24:09.799 "target": "spare", 00:24:09.799 "progress": { 00:24:09.799 "blocks": 59392, 00:24:09.799 "percent": 45 00:24:09.799 } 00:24:09.799 }, 00:24:09.799 "base_bdevs_list": [ 00:24:09.799 { 00:24:09.799 "name": "spare", 00:24:09.799 "uuid": "7dce760b-13f1-5e43-9f67-f89f7829117f", 00:24:09.799 "is_configured": true, 00:24:09.799 "data_offset": 0, 00:24:09.799 "data_size": 65536 00:24:09.799 }, 00:24:09.799 { 00:24:09.799 "name": "BaseBdev2", 00:24:09.799 "uuid": "a7769992-b9a7-452d-bc6e-11299e77ecf6", 00:24:09.799 "is_configured": true, 00:24:09.799 "data_offset": 0, 00:24:09.799 "data_size": 65536 00:24:09.799 }, 00:24:09.799 { 00:24:09.799 "name": "BaseBdev3", 00:24:09.799 "uuid": "02fca7e1-4640-48ec-832e-5e7c069360d3", 00:24:09.799 "is_configured": true, 00:24:09.799 "data_offset": 0, 00:24:09.799 "data_size": 65536 00:24:09.799 } 00:24:09.799 ] 00:24:09.799 }' 00:24:09.799 17:02:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:09.799 17:02:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:09.799 17:02:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:09.799 17:02:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:09.799 17:02:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:10.734 17:02:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:10.734 17:02:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:10.734 17:02:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:10.734 17:02:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:10.734 17:02:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:10.734 17:02:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:10.734 17:02:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.734 17:02:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.991 17:02:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:10.991 "name": "raid_bdev1", 00:24:10.991 "uuid": "b903cf40-4507-4570-8f1b-80497dc4b913", 00:24:10.991 "strip_size_kb": 64, 00:24:10.991 "state": "online", 00:24:10.991 "raid_level": "raid5f", 00:24:10.991 "superblock": false, 00:24:10.991 "num_base_bdevs": 3, 00:24:10.991 "num_base_bdevs_discovered": 3, 00:24:10.991 "num_base_bdevs_operational": 3, 
00:24:10.991 "process": { 00:24:10.991 "type": "rebuild", 00:24:10.991 "target": "spare", 00:24:10.991 "progress": { 00:24:10.991 "blocks": 86016, 00:24:10.991 "percent": 65 00:24:10.991 } 00:24:10.991 }, 00:24:10.991 "base_bdevs_list": [ 00:24:10.991 { 00:24:10.991 "name": "spare", 00:24:10.991 "uuid": "7dce760b-13f1-5e43-9f67-f89f7829117f", 00:24:10.991 "is_configured": true, 00:24:10.991 "data_offset": 0, 00:24:10.991 "data_size": 65536 00:24:10.991 }, 00:24:10.991 { 00:24:10.991 "name": "BaseBdev2", 00:24:10.991 "uuid": "a7769992-b9a7-452d-bc6e-11299e77ecf6", 00:24:10.991 "is_configured": true, 00:24:10.991 "data_offset": 0, 00:24:10.991 "data_size": 65536 00:24:10.991 }, 00:24:10.991 { 00:24:10.991 "name": "BaseBdev3", 00:24:10.991 "uuid": "02fca7e1-4640-48ec-832e-5e7c069360d3", 00:24:10.991 "is_configured": true, 00:24:10.991 "data_offset": 0, 00:24:10.991 "data_size": 65536 00:24:10.991 } 00:24:10.991 ] 00:24:10.991 }' 00:24:10.991 17:02:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:10.991 17:02:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:10.991 17:02:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:11.249 17:02:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:11.249 17:02:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:12.183 17:03:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:12.183 17:03:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:12.183 17:03:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:12.183 17:03:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:12.183 17:03:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:12.183 17:03:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:12.183 17:03:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.183 17:03:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.442 17:03:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:12.442 "name": "raid_bdev1", 00:24:12.442 "uuid": "b903cf40-4507-4570-8f1b-80497dc4b913", 00:24:12.442 "strip_size_kb": 64, 00:24:12.442 "state": "online", 00:24:12.442 "raid_level": "raid5f", 00:24:12.442 "superblock": false, 00:24:12.442 "num_base_bdevs": 3, 00:24:12.442 "num_base_bdevs_discovered": 3, 00:24:12.442 "num_base_bdevs_operational": 3, 00:24:12.442 "process": { 00:24:12.442 "type": "rebuild", 00:24:12.442 "target": "spare", 00:24:12.442 "progress": { 00:24:12.442 "blocks": 112640, 00:24:12.442 "percent": 85 00:24:12.442 } 00:24:12.442 }, 00:24:12.442 "base_bdevs_list": [ 00:24:12.442 { 00:24:12.442 "name": "spare", 00:24:12.442 "uuid": "7dce760b-13f1-5e43-9f67-f89f7829117f", 00:24:12.442 "is_configured": true, 00:24:12.442 "data_offset": 0, 00:24:12.442 "data_size": 65536 00:24:12.442 }, 00:24:12.442 { 00:24:12.442 "name": "BaseBdev2", 00:24:12.442 "uuid": "a7769992-b9a7-452d-bc6e-11299e77ecf6", 00:24:12.442 "is_configured": true, 00:24:12.442 "data_offset": 0, 00:24:12.442 "data_size": 65536 00:24:12.442 }, 00:24:12.442 { 00:24:12.442 "name": "BaseBdev3", 00:24:12.442 "uuid": "02fca7e1-4640-48ec-832e-5e7c069360d3", 00:24:12.442 "is_configured": true, 00:24:12.442 "data_offset": 0, 00:24:12.442 "data_size": 65536 00:24:12.442 } 00:24:12.442 ] 00:24:12.442 }' 00:24:12.442 17:03:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:12.442 17:03:01 -- 
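Re-adding the spare restarted the reconstruction, and the sleep-1 iterations above are the monitoring loop: progress.blocks climbs (22528, 30720, 59392, 86016, ...) while bash's SECONDS builtin is checked against the 616-second budget set earlier. Stripped of the helper plumbing, the loop is essentially:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    timeout=616
    while (( SECONDS < timeout )); do
        pct=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
              jq -r '.[] | select(.name == "raid_bdev1") | .process.progress.percent // "done"')
        [[ $pct == done ]] && break   # the process object disappears once the rebuild finishes
        sleep 1
    done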
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:12.442 17:03:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:12.442 17:03:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:12.442 17:03:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:13.377 [2024-11-05 17:03:02.007881] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:13.377 [2024-11-05 17:03:02.008123] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:13.377 [2024-11-05 17:03:02.008326] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:13.635 17:03:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:13.635 17:03:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:13.635 17:03:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:13.635 17:03:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:13.635 17:03:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:13.635 17:03:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:13.635 17:03:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.635 17:03:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.893 17:03:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:13.893 "name": "raid_bdev1", 00:24:13.893 "uuid": "b903cf40-4507-4570-8f1b-80497dc4b913", 00:24:13.893 "strip_size_kb": 64, 00:24:13.893 "state": "online", 00:24:13.893 "raid_level": "raid5f", 00:24:13.893 "superblock": false, 00:24:13.893 "num_base_bdevs": 3, 00:24:13.893 "num_base_bdevs_discovered": 3, 00:24:13.893 "num_base_bdevs_operational": 3, 00:24:13.893 "base_bdevs_list": [ 00:24:13.893 { 00:24:13.893 "name": "spare", 00:24:13.893 "uuid": "7dce760b-13f1-5e43-9f67-f89f7829117f", 00:24:13.893 "is_configured": true, 00:24:13.893 "data_offset": 0, 00:24:13.893 "data_size": 65536 00:24:13.893 }, 00:24:13.893 { 00:24:13.893 "name": "BaseBdev2", 00:24:13.893 "uuid": "a7769992-b9a7-452d-bc6e-11299e77ecf6", 00:24:13.893 "is_configured": true, 00:24:13.893 "data_offset": 0, 00:24:13.893 "data_size": 65536 00:24:13.893 }, 00:24:13.893 { 00:24:13.893 "name": "BaseBdev3", 00:24:13.893 "uuid": "02fca7e1-4640-48ec-832e-5e7c069360d3", 00:24:13.893 "is_configured": true, 00:24:13.893 "data_offset": 0, 00:24:13.893 "data_size": 65536 00:24:13.893 } 00:24:13.893 ] 00:24:13.893 }' 00:24:13.893 17:03:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:13.893 17:03:02 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:13.893 17:03:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:13.893 17:03:02 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:13.893 17:03:02 -- bdev/bdev_raid.sh@660 -- # break 00:24:13.893 17:03:02 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:13.893 17:03:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:13.893 17:03:02 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:13.893 17:03:02 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:13.893 17:03:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:13.893 17:03:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.893 17:03:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | 
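The "Finished rebuild" notice above means the process object is gone, so the next poll reads none for both type and target and the loop breaks; the test then re-asserts a fully healthy array (3 of 3 members, state online). The checks that follow condense to a single jq -e expression, which exits nonzero if any condition fails:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -e '.[] | select(.name == "raid_bdev1")
               | .state == "online"
                 and .num_base_bdevs_discovered == 3
                 and .num_base_bdevs_operational == 3'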
select(.name == "raid_bdev1")' 00:24:14.152 17:03:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:14.152 "name": "raid_bdev1", 00:24:14.152 "uuid": "b903cf40-4507-4570-8f1b-80497dc4b913", 00:24:14.152 "strip_size_kb": 64, 00:24:14.152 "state": "online", 00:24:14.152 "raid_level": "raid5f", 00:24:14.152 "superblock": false, 00:24:14.152 "num_base_bdevs": 3, 00:24:14.152 "num_base_bdevs_discovered": 3, 00:24:14.152 "num_base_bdevs_operational": 3, 00:24:14.152 "base_bdevs_list": [ 00:24:14.152 { 00:24:14.152 "name": "spare", 00:24:14.152 "uuid": "7dce760b-13f1-5e43-9f67-f89f7829117f", 00:24:14.152 "is_configured": true, 00:24:14.152 "data_offset": 0, 00:24:14.152 "data_size": 65536 00:24:14.152 }, 00:24:14.152 { 00:24:14.152 "name": "BaseBdev2", 00:24:14.152 "uuid": "a7769992-b9a7-452d-bc6e-11299e77ecf6", 00:24:14.152 "is_configured": true, 00:24:14.152 "data_offset": 0, 00:24:14.152 "data_size": 65536 00:24:14.152 }, 00:24:14.152 { 00:24:14.152 "name": "BaseBdev3", 00:24:14.152 "uuid": "02fca7e1-4640-48ec-832e-5e7c069360d3", 00:24:14.152 "is_configured": true, 00:24:14.152 "data_offset": 0, 00:24:14.152 "data_size": 65536 00:24:14.152 } 00:24:14.152 ] 00:24:14.152 }' 00:24:14.152 17:03:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:14.152 17:03:02 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:14.152 17:03:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:14.152 17:03:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:14.152 17:03:03 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:14.152 17:03:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:14.152 17:03:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:14.152 17:03:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:14.152 17:03:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:14.152 17:03:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:14.152 17:03:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:14.152 17:03:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:14.152 17:03:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:14.152 17:03:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:14.152 17:03:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.152 17:03:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.411 17:03:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:14.411 "name": "raid_bdev1", 00:24:14.411 "uuid": "b903cf40-4507-4570-8f1b-80497dc4b913", 00:24:14.411 "strip_size_kb": 64, 00:24:14.411 "state": "online", 00:24:14.411 "raid_level": "raid5f", 00:24:14.411 "superblock": false, 00:24:14.411 "num_base_bdevs": 3, 00:24:14.411 "num_base_bdevs_discovered": 3, 00:24:14.411 "num_base_bdevs_operational": 3, 00:24:14.411 "base_bdevs_list": [ 00:24:14.411 { 00:24:14.411 "name": "spare", 00:24:14.411 "uuid": "7dce760b-13f1-5e43-9f67-f89f7829117f", 00:24:14.411 "is_configured": true, 00:24:14.411 "data_offset": 0, 00:24:14.411 "data_size": 65536 00:24:14.411 }, 00:24:14.411 { 00:24:14.411 "name": "BaseBdev2", 00:24:14.411 "uuid": "a7769992-b9a7-452d-bc6e-11299e77ecf6", 00:24:14.411 "is_configured": true, 00:24:14.411 "data_offset": 0, 00:24:14.411 "data_size": 65536 00:24:14.411 }, 00:24:14.411 { 00:24:14.411 "name": "BaseBdev3", 00:24:14.411 "uuid": 
"02fca7e1-4640-48ec-832e-5e7c069360d3", 00:24:14.411 "is_configured": true, 00:24:14.411 "data_offset": 0, 00:24:14.411 "data_size": 65536 00:24:14.411 } 00:24:14.411 ] 00:24:14.411 }' 00:24:14.411 17:03:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:14.411 17:03:03 -- common/autotest_common.sh@10 -- # set +x 00:24:15.344 17:03:03 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:15.344 [2024-11-05 17:03:04.145659] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:15.345 [2024-11-05 17:03:04.145849] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:15.345 [2024-11-05 17:03:04.146032] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:15.345 [2024-11-05 17:03:04.146224] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:15.345 [2024-11-05 17:03:04.146329] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:24:15.345 17:03:04 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.345 17:03:04 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:15.603 17:03:04 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:15.603 17:03:04 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:15.603 17:03:04 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:15.603 17:03:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:15.603 17:03:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:15.603 17:03:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:15.603 17:03:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:15.603 17:03:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:15.603 17:03:04 -- bdev/nbd_common.sh@12 -- # local i 00:24:15.603 17:03:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:15.603 17:03:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:15.603 17:03:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:15.862 /dev/nbd0 00:24:15.862 17:03:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:15.862 17:03:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:15.862 17:03:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:15.862 17:03:04 -- common/autotest_common.sh@867 -- # local i 00:24:15.862 17:03:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:15.862 17:03:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:15.862 17:03:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:15.862 17:03:04 -- common/autotest_common.sh@871 -- # break 00:24:15.862 17:03:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:15.862 17:03:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:15.862 17:03:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:15.862 1+0 records in 00:24:15.862 1+0 records out 00:24:15.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516283 s, 7.9 MB/s 00:24:15.862 17:03:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:15.862 17:03:04 
-- common/autotest_common.sh@884 -- # size=4096 00:24:15.862 17:03:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:15.862 17:03:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:15.862 17:03:04 -- common/autotest_common.sh@887 -- # return 0 00:24:15.862 17:03:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:15.862 17:03:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:15.862 17:03:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:16.121 /dev/nbd1 00:24:16.121 17:03:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:16.121 17:03:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:16.121 17:03:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:24:16.121 17:03:04 -- common/autotest_common.sh@867 -- # local i 00:24:16.121 17:03:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:16.121 17:03:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:16.121 17:03:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:24:16.121 17:03:04 -- common/autotest_common.sh@871 -- # break 00:24:16.121 17:03:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:16.121 17:03:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:16.121 17:03:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:16.121 1+0 records in 00:24:16.121 1+0 records out 00:24:16.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424782 s, 9.6 MB/s 00:24:16.121 17:03:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:16.121 17:03:04 -- common/autotest_common.sh@884 -- # size=4096 00:24:16.121 17:03:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:16.121 17:03:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:16.121 17:03:04 -- common/autotest_common.sh@887 -- # return 0 00:24:16.121 17:03:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:16.121 17:03:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:16.121 17:03:04 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:16.380 17:03:05 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:16.380 17:03:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:16.380 17:03:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:16.380 17:03:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:16.380 17:03:05 -- bdev/nbd_common.sh@51 -- # local i 00:24:16.380 17:03:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:16.380 17:03:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:16.638 17:03:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:16.638 17:03:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:16.638 17:03:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:16.638 17:03:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:16.638 17:03:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:16.638 17:03:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:16.638 17:03:05 -- bdev/nbd_common.sh@41 -- # break 00:24:16.638 17:03:05 -- bdev/nbd_common.sh@45 -- # return 0 00:24:16.638 17:03:05 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:24:16.638 17:03:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:16.896 17:03:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:16.896 17:03:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:16.896 17:03:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:16.896 17:03:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:16.896 17:03:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:16.896 17:03:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:16.896 17:03:05 -- bdev/nbd_common.sh@41 -- # break 00:24:16.896 17:03:05 -- bdev/nbd_common.sh@45 -- # return 0 00:24:16.896 17:03:05 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:24:16.896 17:03:05 -- bdev/bdev_raid.sh@709 -- # killprocess 128334 00:24:16.896 17:03:05 -- common/autotest_common.sh@936 -- # '[' -z 128334 ']' 00:24:16.896 17:03:05 -- common/autotest_common.sh@940 -- # kill -0 128334 00:24:16.896 17:03:05 -- common/autotest_common.sh@941 -- # uname 00:24:16.896 17:03:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:16.896 17:03:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128334 00:24:16.896 17:03:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:16.896 17:03:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:16.896 17:03:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 128334' 00:24:16.896 killing process with pid 128334 00:24:16.896 17:03:05 -- common/autotest_common.sh@955 -- # kill 128334 00:24:16.896 Received shutdown signal, test time was about 60.000000 seconds 00:24:16.896 00:24:16.896 Latency(us) 00:24:16.896 [2024-11-05T17:03:05.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.896 [2024-11-05T17:03:05.773Z] =================================================================================================================== 00:24:16.896 [2024-11-05T17:03:05.773Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:16.896 17:03:05 -- common/autotest_common.sh@960 -- # wait 128334 00:24:16.896 [2024-11-05 17:03:05.645525] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:17.155 [2024-11-05 17:03:05.899628] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:18.090 ************************************ 00:24:18.090 END TEST raid5f_rebuild_test 00:24:18.090 ************************************ 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:18.090 00:24:18.090 real 0m20.503s 00:24:18.090 user 0m31.036s 00:24:18.090 sys 0m2.230s 00:24:18.090 17:03:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:18.090 17:03:06 -- common/autotest_common.sh@10 -- # set +x 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:24:18.090 17:03:06 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:24:18.090 17:03:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:18.090 17:03:06 -- common/autotest_common.sh@10 -- # set +x 00:24:18.090 ************************************ 00:24:18.090 START TEST raid5f_rebuild_test_sb 00:24:18.090 ************************************ 00:24:18.090 17:03:06 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 3 true false 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@518 -- 
# local num_base_bdevs=3 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@544 -- # raid_pid=128875 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@545 -- # waitforlisten 128875 /var/tmp/spdk-raid.sock 00:24:18.090 17:03:06 -- common/autotest_common.sh@829 -- # '[' -z 128875 ']' 00:24:18.090 17:03:06 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:18.090 17:03:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:18.090 17:03:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:18.090 17:03:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:18.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:18.090 17:03:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:18.090 17:03:06 -- common/autotest_common.sh@10 -- # set +x 00:24:18.090 [2024-11-05 17:03:06.966616] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:18.090 [2024-11-05 17:03:06.968230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128875 ] 00:24:18.090 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:18.090 Zero copy mechanism will not be used. 
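For reference, the setup phase traced below reduces to a short sequence of rpc.py calls against the bdevperf RPC socket. A minimal sketch, assuming an SPDK application is already listening on /var/tmp/spdk-raid.sock and the repo sits at this run's path; the $rpc/$sock shorthands and the loop are illustrative only, while every RPC name and flag is taken verbatim from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # one 32 MiB malloc bdev (512 B blocks) plus a passthru wrapper per RAID member
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $rpc -s $sock bdev_malloc_create 32 512 -b "${b}_malloc"
        $rpc -s $sock bdev_passthru_create -b "${b}_malloc" -p "$b"
    done
    # raid5f array: 64 KiB strip size (-z 64), on-disk superblock enabled (-s)
    $rpc -s $sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'

Because the superblock is enabled, each 65536-block member reserves a 2048-block data_offset for metadata, so the three-member raid5f array exposes 2 x (65536 - 2048) = 126976 blocks of capacity, matching the "blockcnt 126976, blocklen 512" line in the trace.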
00:24:18.348 [2024-11-05 17:03:07.129937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.606 [2024-11-05 17:03:07.354814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.864 [2024-11-05 17:03:07.520440] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:19.122 17:03:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:19.122 17:03:07 -- common/autotest_common.sh@862 -- # return 0 00:24:19.122 17:03:07 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:19.122 17:03:07 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:19.122 17:03:07 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:19.380 BaseBdev1_malloc 00:24:19.380 17:03:08 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:19.639 [2024-11-05 17:03:08.358837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:19.639 [2024-11-05 17:03:08.359139] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:19.639 [2024-11-05 17:03:08.359289] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:24:19.639 [2024-11-05 17:03:08.359449] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:19.639 [2024-11-05 17:03:08.361856] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:19.639 [2024-11-05 17:03:08.362029] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:19.639 BaseBdev1 00:24:19.639 17:03:08 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:19.639 17:03:08 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:19.639 17:03:08 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:19.897 BaseBdev2_malloc 00:24:19.897 17:03:08 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:20.154 [2024-11-05 17:03:08.865683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:20.154 [2024-11-05 17:03:08.865952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:20.154 [2024-11-05 17:03:08.866036] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:20.154 [2024-11-05 17:03:08.866350] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.154 [2024-11-05 17:03:08.868602] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.154 [2024-11-05 17:03:08.868787] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:20.154 BaseBdev2 00:24:20.154 17:03:08 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:20.154 17:03:08 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:20.154 17:03:08 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:20.412 BaseBdev3_malloc 00:24:20.412 17:03:09 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:24:20.670 [2024-11-05 17:03:09.335543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:20.670 [2024-11-05 17:03:09.335784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:20.670 [2024-11-05 17:03:09.335871] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:24:20.670 [2024-11-05 17:03:09.336131] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.670 [2024-11-05 17:03:09.338460] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.670 [2024-11-05 17:03:09.338649] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:20.670 BaseBdev3 00:24:20.670 17:03:09 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:20.927 spare_malloc 00:24:20.927 17:03:09 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:20.927 spare_delay 00:24:20.927 17:03:09 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:21.184 [2024-11-05 17:03:09.961690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:21.184 [2024-11-05 17:03:09.961935] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:21.184 [2024-11-05 17:03:09.962009] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:21.184 [2024-11-05 17:03:09.962289] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:21.184 [2024-11-05 17:03:09.964639] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:21.184 [2024-11-05 17:03:09.964856] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:21.184 spare 00:24:21.184 17:03:09 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:24:21.443 [2024-11-05 17:03:10.153831] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:21.443 [2024-11-05 17:03:10.155777] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:21.443 [2024-11-05 17:03:10.155967] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:21.443 [2024-11-05 17:03:10.156218] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:24:21.443 [2024-11-05 17:03:10.156267] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:21.443 [2024-11-05 17:03:10.156475] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:24:21.443 [2024-11-05 17:03:10.160699] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:24:21.443 [2024-11-05 17:03:10.160853] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:24:21.443 [2024-11-05 17:03:10.161124] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:21.443 17:03:10 -- bdev/bdev_raid.sh@564 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:21.443 17:03:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:21.443 17:03:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:21.443 17:03:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:21.443 17:03:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:21.443 17:03:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:21.443 17:03:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:21.443 17:03:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:21.443 17:03:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:21.443 17:03:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:21.443 17:03:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.443 17:03:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.701 17:03:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:21.701 "name": "raid_bdev1", 00:24:21.701 "uuid": "934b62a5-79be-4e4a-a487-6e8e4c324f53", 00:24:21.701 "strip_size_kb": 64, 00:24:21.701 "state": "online", 00:24:21.701 "raid_level": "raid5f", 00:24:21.701 "superblock": true, 00:24:21.701 "num_base_bdevs": 3, 00:24:21.701 "num_base_bdevs_discovered": 3, 00:24:21.701 "num_base_bdevs_operational": 3, 00:24:21.701 "base_bdevs_list": [ 00:24:21.701 { 00:24:21.701 "name": "BaseBdev1", 00:24:21.701 "uuid": "a7e04899-2850-5af9-aca2-6b096219f00b", 00:24:21.701 "is_configured": true, 00:24:21.701 "data_offset": 2048, 00:24:21.701 "data_size": 63488 00:24:21.701 }, 00:24:21.701 { 00:24:21.701 "name": "BaseBdev2", 00:24:21.701 "uuid": "727b52ab-f3d0-563d-b571-bc6dd07da3da", 00:24:21.701 "is_configured": true, 00:24:21.701 "data_offset": 2048, 00:24:21.701 "data_size": 63488 00:24:21.701 }, 00:24:21.701 { 00:24:21.701 "name": "BaseBdev3", 00:24:21.701 "uuid": "d19b7aff-84e4-5734-8203-97cc6fb6a298", 00:24:21.701 "is_configured": true, 00:24:21.701 "data_offset": 2048, 00:24:21.701 "data_size": 63488 00:24:21.701 } 00:24:21.701 ] 00:24:21.701 }' 00:24:21.701 17:03:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:21.701 17:03:10 -- common/autotest_common.sh@10 -- # set +x 00:24:22.267 17:03:10 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:22.267 17:03:10 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:22.525 [2024-11-05 17:03:11.174123] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:22.525 17:03:11 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:24:22.525 17:03:11 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.525 17:03:11 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:22.525 17:03:11 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:24:22.525 17:03:11 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:22.525 17:03:11 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:22.525 17:03:11 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:22.525 17:03:11 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:22.525 17:03:11 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:22.525 17:03:11 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:22.525 17:03:11 -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:22.525 17:03:11 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:22.525 17:03:11 -- bdev/nbd_common.sh@12 -- # local i 00:24:22.525 17:03:11 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:22.525 17:03:11 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:22.525 17:03:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:22.783 [2024-11-05 17:03:11.626944] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:24:22.783 /dev/nbd0 00:24:23.041 17:03:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:23.041 17:03:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:23.041 17:03:11 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:23.041 17:03:11 -- common/autotest_common.sh@867 -- # local i 00:24:23.041 17:03:11 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:23.041 17:03:11 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:23.041 17:03:11 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:23.041 17:03:11 -- common/autotest_common.sh@871 -- # break 00:24:23.041 17:03:11 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:23.041 17:03:11 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:23.041 17:03:11 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:23.041 1+0 records in 00:24:23.041 1+0 records out 00:24:23.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039096 s, 10.5 MB/s 00:24:23.041 17:03:11 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:23.041 17:03:11 -- common/autotest_common.sh@884 -- # size=4096 00:24:23.041 17:03:11 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:23.041 17:03:11 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:23.041 17:03:11 -- common/autotest_common.sh@887 -- # return 0 00:24:23.041 17:03:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:23.041 17:03:11 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:23.041 17:03:11 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:24:23.041 17:03:11 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:24:23.041 17:03:11 -- bdev/bdev_raid.sh@582 -- # echo 128 00:24:23.041 17:03:11 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:24:23.299 496+0 records in 00:24:23.299 496+0 records out 00:24:23.299 65011712 bytes (65 MB, 62 MiB) copied, 0.407027 s, 160 MB/s 00:24:23.299 17:03:12 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:23.299 17:03:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:23.299 17:03:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:23.299 17:03:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:23.299 17:03:12 -- bdev/nbd_common.sh@51 -- # local i 00:24:23.299 17:03:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:23.299 17:03:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:23.558 17:03:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:23.558 17:03:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:23.558 17:03:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:23.558 17:03:12 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:23.558 17:03:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:23.558 17:03:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:23.558 17:03:12 -- bdev/nbd_common.sh@41 -- # break 00:24:23.558 17:03:12 -- bdev/nbd_common.sh@45 -- # return 0 00:24:23.558 17:03:12 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:23.558 [2024-11-05 17:03:12.375889] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:23.817 [2024-11-05 17:03:12.601854] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:23.817 17:03:12 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:23.817 17:03:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:23.817 17:03:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:23.817 17:03:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:23.817 17:03:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:23.817 17:03:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:23.817 17:03:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:23.817 17:03:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:23.817 17:03:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:23.817 17:03:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:23.817 17:03:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.817 17:03:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.075 17:03:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:24.075 "name": "raid_bdev1", 00:24:24.075 "uuid": "934b62a5-79be-4e4a-a487-6e8e4c324f53", 00:24:24.075 "strip_size_kb": 64, 00:24:24.075 "state": "online", 00:24:24.075 "raid_level": "raid5f", 00:24:24.075 "superblock": true, 00:24:24.075 "num_base_bdevs": 3, 00:24:24.075 "num_base_bdevs_discovered": 2, 00:24:24.075 "num_base_bdevs_operational": 2, 00:24:24.075 "base_bdevs_list": [ 00:24:24.075 { 00:24:24.075 "name": null, 00:24:24.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.075 "is_configured": false, 00:24:24.075 "data_offset": 2048, 00:24:24.075 "data_size": 63488 00:24:24.075 }, 00:24:24.075 { 00:24:24.075 "name": "BaseBdev2", 00:24:24.075 "uuid": "727b52ab-f3d0-563d-b571-bc6dd07da3da", 00:24:24.075 "is_configured": true, 00:24:24.075 "data_offset": 2048, 00:24:24.075 "data_size": 63488 00:24:24.075 }, 00:24:24.075 { 00:24:24.075 "name": "BaseBdev3", 00:24:24.075 "uuid": "d19b7aff-84e4-5734-8203-97cc6fb6a298", 00:24:24.075 "is_configured": true, 00:24:24.075 "data_offset": 2048, 00:24:24.075 "data_size": 63488 00:24:24.075 } 00:24:24.075 ] 00:24:24.075 }' 00:24:24.075 17:03:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:24.075 17:03:12 -- common/autotest_common.sh@10 -- # set +x 00:24:24.642 17:03:13 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:24.901 [2024-11-05 17:03:13.702077] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:24.901 [2024-11-05 17:03:13.703264] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:24.901 [2024-11-05 17:03:13.714497] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000028b70 00:24:24.901 17:03:13 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:24.901 [2024-11-05 17:03:13.729564] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:25.834 17:03:14 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:25.835 17:03:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:25.835 17:03:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:25.835 17:03:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:25.835 17:03:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:26.094 17:03:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.094 17:03:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.094 17:03:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:26.094 "name": "raid_bdev1", 00:24:26.094 "uuid": "934b62a5-79be-4e4a-a487-6e8e4c324f53", 00:24:26.094 "strip_size_kb": 64, 00:24:26.094 "state": "online", 00:24:26.094 "raid_level": "raid5f", 00:24:26.094 "superblock": true, 00:24:26.094 "num_base_bdevs": 3, 00:24:26.094 "num_base_bdevs_discovered": 3, 00:24:26.094 "num_base_bdevs_operational": 3, 00:24:26.094 "process": { 00:24:26.094 "type": "rebuild", 00:24:26.094 "target": "spare", 00:24:26.094 "progress": { 00:24:26.094 "blocks": 22528, 00:24:26.094 "percent": 17 00:24:26.094 } 00:24:26.094 }, 00:24:26.094 "base_bdevs_list": [ 00:24:26.094 { 00:24:26.094 "name": "spare", 00:24:26.094 "uuid": "bf3ce5d9-3b27-5a44-b2cb-d8d2b134f72d", 00:24:26.094 "is_configured": true, 00:24:26.094 "data_offset": 2048, 00:24:26.094 "data_size": 63488 00:24:26.094 }, 00:24:26.094 { 00:24:26.094 "name": "BaseBdev2", 00:24:26.094 "uuid": "727b52ab-f3d0-563d-b571-bc6dd07da3da", 00:24:26.094 "is_configured": true, 00:24:26.094 "data_offset": 2048, 00:24:26.094 "data_size": 63488 00:24:26.094 }, 00:24:26.094 { 00:24:26.094 "name": "BaseBdev3", 00:24:26.094 "uuid": "d19b7aff-84e4-5734-8203-97cc6fb6a298", 00:24:26.094 "is_configured": true, 00:24:26.094 "data_offset": 2048, 00:24:26.094 "data_size": 63488 00:24:26.094 } 00:24:26.094 ] 00:24:26.094 }' 00:24:26.094 17:03:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:26.372 17:03:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:26.372 17:03:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:26.372 17:03:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:26.372 17:03:15 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:26.372 [2024-11-05 17:03:15.227433] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:26.372 [2024-11-05 17:03:15.241910] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:26.372 [2024-11-05 17:03:15.242116] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:26.643 17:03:15 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:26.643 17:03:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:26.643 17:03:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:26.643 17:03:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:26.643 17:03:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:26.643 17:03:15 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:26.643 17:03:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:26.643 17:03:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:26.643 17:03:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:26.643 17:03:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:26.643 17:03:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.643 17:03:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.643 17:03:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:26.643 "name": "raid_bdev1", 00:24:26.643 "uuid": "934b62a5-79be-4e4a-a487-6e8e4c324f53", 00:24:26.643 "strip_size_kb": 64, 00:24:26.643 "state": "online", 00:24:26.643 "raid_level": "raid5f", 00:24:26.643 "superblock": true, 00:24:26.643 "num_base_bdevs": 3, 00:24:26.643 "num_base_bdevs_discovered": 2, 00:24:26.643 "num_base_bdevs_operational": 2, 00:24:26.643 "base_bdevs_list": [ 00:24:26.643 { 00:24:26.643 "name": null, 00:24:26.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.643 "is_configured": false, 00:24:26.643 "data_offset": 2048, 00:24:26.643 "data_size": 63488 00:24:26.643 }, 00:24:26.643 { 00:24:26.643 "name": "BaseBdev2", 00:24:26.643 "uuid": "727b52ab-f3d0-563d-b571-bc6dd07da3da", 00:24:26.643 "is_configured": true, 00:24:26.643 "data_offset": 2048, 00:24:26.643 "data_size": 63488 00:24:26.643 }, 00:24:26.643 { 00:24:26.643 "name": "BaseBdev3", 00:24:26.643 "uuid": "d19b7aff-84e4-5734-8203-97cc6fb6a298", 00:24:26.643 "is_configured": true, 00:24:26.643 "data_offset": 2048, 00:24:26.643 "data_size": 63488 00:24:26.643 } 00:24:26.643 ] 00:24:26.643 }' 00:24:26.643 17:03:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:26.643 17:03:15 -- common/autotest_common.sh@10 -- # set +x 00:24:27.577 17:03:16 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:27.577 17:03:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:27.577 17:03:16 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:27.577 17:03:16 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:27.577 17:03:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:27.577 17:03:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.577 17:03:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.577 17:03:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:27.577 "name": "raid_bdev1", 00:24:27.577 "uuid": "934b62a5-79be-4e4a-a487-6e8e4c324f53", 00:24:27.577 "strip_size_kb": 64, 00:24:27.577 "state": "online", 00:24:27.577 "raid_level": "raid5f", 00:24:27.577 "superblock": true, 00:24:27.577 "num_base_bdevs": 3, 00:24:27.577 "num_base_bdevs_discovered": 2, 00:24:27.577 "num_base_bdevs_operational": 2, 00:24:27.577 "base_bdevs_list": [ 00:24:27.577 { 00:24:27.577 "name": null, 00:24:27.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.577 "is_configured": false, 00:24:27.577 "data_offset": 2048, 00:24:27.577 "data_size": 63488 00:24:27.577 }, 00:24:27.577 { 00:24:27.577 "name": "BaseBdev2", 00:24:27.577 "uuid": "727b52ab-f3d0-563d-b571-bc6dd07da3da", 00:24:27.577 "is_configured": true, 00:24:27.577 "data_offset": 2048, 00:24:27.577 "data_size": 63488 00:24:27.577 }, 00:24:27.577 { 00:24:27.577 "name": "BaseBdev3", 00:24:27.577 "uuid": 
"d19b7aff-84e4-5734-8203-97cc6fb6a298", 00:24:27.577 "is_configured": true, 00:24:27.577 "data_offset": 2048, 00:24:27.577 "data_size": 63488 00:24:27.577 } 00:24:27.577 ] 00:24:27.577 }' 00:24:27.577 17:03:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:27.577 17:03:16 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:27.577 17:03:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:27.835 17:03:16 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:27.835 17:03:16 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:27.835 [2024-11-05 17:03:16.714283] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:27.835 [2024-11-05 17:03:16.714443] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:27.835 [2024-11-05 17:03:16.724457] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028d10 00:24:27.835 [2024-11-05 17:03:16.730033] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:28.093 17:03:16 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:29.025 17:03:17 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:29.026 17:03:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:29.026 17:03:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:29.026 17:03:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:29.026 17:03:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:29.026 17:03:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.026 17:03:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.283 17:03:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:29.283 "name": "raid_bdev1", 00:24:29.283 "uuid": "934b62a5-79be-4e4a-a487-6e8e4c324f53", 00:24:29.283 "strip_size_kb": 64, 00:24:29.283 "state": "online", 00:24:29.283 "raid_level": "raid5f", 00:24:29.283 "superblock": true, 00:24:29.283 "num_base_bdevs": 3, 00:24:29.283 "num_base_bdevs_discovered": 3, 00:24:29.283 "num_base_bdevs_operational": 3, 00:24:29.283 "process": { 00:24:29.283 "type": "rebuild", 00:24:29.283 "target": "spare", 00:24:29.283 "progress": { 00:24:29.283 "blocks": 24576, 00:24:29.283 "percent": 19 00:24:29.283 } 00:24:29.283 }, 00:24:29.283 "base_bdevs_list": [ 00:24:29.283 { 00:24:29.283 "name": "spare", 00:24:29.283 "uuid": "bf3ce5d9-3b27-5a44-b2cb-d8d2b134f72d", 00:24:29.283 "is_configured": true, 00:24:29.283 "data_offset": 2048, 00:24:29.283 "data_size": 63488 00:24:29.283 }, 00:24:29.283 { 00:24:29.283 "name": "BaseBdev2", 00:24:29.283 "uuid": "727b52ab-f3d0-563d-b571-bc6dd07da3da", 00:24:29.283 "is_configured": true, 00:24:29.283 "data_offset": 2048, 00:24:29.283 "data_size": 63488 00:24:29.283 }, 00:24:29.283 { 00:24:29.283 "name": "BaseBdev3", 00:24:29.283 "uuid": "d19b7aff-84e4-5734-8203-97cc6fb6a298", 00:24:29.283 "is_configured": true, 00:24:29.283 "data_offset": 2048, 00:24:29.283 "data_size": 63488 00:24:29.283 } 00:24:29.283 ] 00:24:29.283 }' 00:24:29.283 17:03:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:29.283 17:03:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:29.283 17:03:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:29.283 17:03:18 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:29.283 17:03:18 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:24:29.283 17:03:18 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:24:29.283 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:24:29.283 17:03:18 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:24:29.283 17:03:18 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:24:29.283 17:03:18 -- bdev/bdev_raid.sh@657 -- # local timeout=638 00:24:29.283 17:03:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:29.283 17:03:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:29.283 17:03:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:29.283 17:03:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:29.283 17:03:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:29.283 17:03:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:29.283 17:03:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.283 17:03:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.540 17:03:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:29.540 "name": "raid_bdev1", 00:24:29.540 "uuid": "934b62a5-79be-4e4a-a487-6e8e4c324f53", 00:24:29.540 "strip_size_kb": 64, 00:24:29.540 "state": "online", 00:24:29.540 "raid_level": "raid5f", 00:24:29.540 "superblock": true, 00:24:29.540 "num_base_bdevs": 3, 00:24:29.540 "num_base_bdevs_discovered": 3, 00:24:29.540 "num_base_bdevs_operational": 3, 00:24:29.540 "process": { 00:24:29.540 "type": "rebuild", 00:24:29.540 "target": "spare", 00:24:29.540 "progress": { 00:24:29.540 "blocks": 30720, 00:24:29.540 "percent": 24 00:24:29.540 } 00:24:29.540 }, 00:24:29.540 "base_bdevs_list": [ 00:24:29.540 { 00:24:29.540 "name": "spare", 00:24:29.540 "uuid": "bf3ce5d9-3b27-5a44-b2cb-d8d2b134f72d", 00:24:29.540 "is_configured": true, 00:24:29.540 "data_offset": 2048, 00:24:29.540 "data_size": 63488 00:24:29.540 }, 00:24:29.540 { 00:24:29.540 "name": "BaseBdev2", 00:24:29.540 "uuid": "727b52ab-f3d0-563d-b571-bc6dd07da3da", 00:24:29.540 "is_configured": true, 00:24:29.540 "data_offset": 2048, 00:24:29.540 "data_size": 63488 00:24:29.540 }, 00:24:29.540 { 00:24:29.540 "name": "BaseBdev3", 00:24:29.540 "uuid": "d19b7aff-84e4-5734-8203-97cc6fb6a298", 00:24:29.540 "is_configured": true, 00:24:29.540 "data_offset": 2048, 00:24:29.540 "data_size": 63488 00:24:29.540 } 00:24:29.540 ] 00:24:29.540 }' 00:24:29.540 17:03:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:29.540 17:03:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:29.540 17:03:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:29.540 17:03:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:29.540 17:03:18 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:30.914 17:03:19 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:30.914 17:03:19 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:30.914 17:03:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:30.914 17:03:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:30.914 17:03:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:30.914 17:03:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:30.914 17:03:19 -- 
bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.914 17:03:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.914 17:03:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:30.914 "name": "raid_bdev1", 00:24:30.914 "uuid": "934b62a5-79be-4e4a-a487-6e8e4c324f53", 00:24:30.914 "strip_size_kb": 64, 00:24:30.914 "state": "online", 00:24:30.914 "raid_level": "raid5f", 00:24:30.914 "superblock": true, 00:24:30.914 "num_base_bdevs": 3, 00:24:30.914 "num_base_bdevs_discovered": 3, 00:24:30.914 "num_base_bdevs_operational": 3, 00:24:30.914 "process": { 00:24:30.914 "type": "rebuild", 00:24:30.914 "target": "spare", 00:24:30.914 "progress": { 00:24:30.914 "blocks": 57344, 00:24:30.914 "percent": 45 00:24:30.914 } 00:24:30.914 }, 00:24:30.914 "base_bdevs_list": [ 00:24:30.914 { 00:24:30.914 "name": "spare", 00:24:30.914 "uuid": "bf3ce5d9-3b27-5a44-b2cb-d8d2b134f72d", 00:24:30.914 "is_configured": true, 00:24:30.914 "data_offset": 2048, 00:24:30.914 "data_size": 63488 00:24:30.914 }, 00:24:30.914 { 00:24:30.914 "name": "BaseBdev2", 00:24:30.914 "uuid": "727b52ab-f3d0-563d-b571-bc6dd07da3da", 00:24:30.914 "is_configured": true, 00:24:30.914 "data_offset": 2048, 00:24:30.914 "data_size": 63488 00:24:30.914 }, 00:24:30.914 { 00:24:30.914 "name": "BaseBdev3", 00:24:30.914 "uuid": "d19b7aff-84e4-5734-8203-97cc6fb6a298", 00:24:30.914 "is_configured": true, 00:24:30.914 "data_offset": 2048, 00:24:30.914 "data_size": 63488 00:24:30.914 } 00:24:30.914 ] 00:24:30.914 }' 00:24:30.914 17:03:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:30.914 17:03:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:30.914 17:03:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:30.914 17:03:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:30.914 17:03:19 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:31.846 17:03:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:31.846 17:03:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:31.846 17:03:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:31.846 17:03:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:31.846 17:03:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:31.846 17:03:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:31.846 17:03:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.846 17:03:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.105 17:03:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:32.105 "name": "raid_bdev1", 00:24:32.105 "uuid": "934b62a5-79be-4e4a-a487-6e8e4c324f53", 00:24:32.105 "strip_size_kb": 64, 00:24:32.105 "state": "online", 00:24:32.105 "raid_level": "raid5f", 00:24:32.105 "superblock": true, 00:24:32.105 "num_base_bdevs": 3, 00:24:32.105 "num_base_bdevs_discovered": 3, 00:24:32.105 "num_base_bdevs_operational": 3, 00:24:32.105 "process": { 00:24:32.105 "type": "rebuild", 00:24:32.105 "target": "spare", 00:24:32.105 "progress": { 00:24:32.105 "blocks": 83968, 00:24:32.105 "percent": 66 00:24:32.105 } 00:24:32.105 }, 00:24:32.105 "base_bdevs_list": [ 00:24:32.105 { 00:24:32.105 "name": "spare", 00:24:32.105 "uuid": "bf3ce5d9-3b27-5a44-b2cb-d8d2b134f72d", 00:24:32.105 "is_configured": true, 00:24:32.105 "data_offset": 
2048, 00:24:32.105 "data_size": 63488 00:24:32.105 }, 00:24:32.105 { 00:24:32.105 "name": "BaseBdev2", 00:24:32.105 "uuid": "727b52ab-f3d0-563d-b571-bc6dd07da3da", 00:24:32.105 "is_configured": true, 00:24:32.105 "data_offset": 2048, 00:24:32.105 "data_size": 63488 00:24:32.105 }, 00:24:32.105 { 00:24:32.105 "name": "BaseBdev3", 00:24:32.105 "uuid": "d19b7aff-84e4-5734-8203-97cc6fb6a298", 00:24:32.105 "is_configured": true, 00:24:32.105 "data_offset": 2048, 00:24:32.105 "data_size": 63488 00:24:32.105 } 00:24:32.105 ] 00:24:32.105 }' 00:24:32.105 17:03:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:32.363 17:03:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:32.363 17:03:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:32.363 17:03:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:32.363 17:03:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:33.297 17:03:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:33.297 17:03:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:33.297 17:03:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:33.297 17:03:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:33.297 17:03:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:33.297 17:03:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:33.297 17:03:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.297 17:03:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.556 17:03:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:33.556 "name": "raid_bdev1", 00:24:33.556 "uuid": "934b62a5-79be-4e4a-a487-6e8e4c324f53", 00:24:33.556 "strip_size_kb": 64, 00:24:33.556 "state": "online", 00:24:33.556 "raid_level": "raid5f", 00:24:33.556 "superblock": true, 00:24:33.556 "num_base_bdevs": 3, 00:24:33.556 "num_base_bdevs_discovered": 3, 00:24:33.556 "num_base_bdevs_operational": 3, 00:24:33.556 "process": { 00:24:33.556 "type": "rebuild", 00:24:33.556 "target": "spare", 00:24:33.556 "progress": { 00:24:33.556 "blocks": 112640, 00:24:33.556 "percent": 88 00:24:33.556 } 00:24:33.556 }, 00:24:33.556 "base_bdevs_list": [ 00:24:33.556 { 00:24:33.556 "name": "spare", 00:24:33.556 "uuid": "bf3ce5d9-3b27-5a44-b2cb-d8d2b134f72d", 00:24:33.556 "is_configured": true, 00:24:33.556 "data_offset": 2048, 00:24:33.556 "data_size": 63488 00:24:33.556 }, 00:24:33.556 { 00:24:33.556 "name": "BaseBdev2", 00:24:33.556 "uuid": "727b52ab-f3d0-563d-b571-bc6dd07da3da", 00:24:33.556 "is_configured": true, 00:24:33.556 "data_offset": 2048, 00:24:33.556 "data_size": 63488 00:24:33.556 }, 00:24:33.556 { 00:24:33.556 "name": "BaseBdev3", 00:24:33.556 "uuid": "d19b7aff-84e4-5734-8203-97cc6fb6a298", 00:24:33.556 "is_configured": true, 00:24:33.556 "data_offset": 2048, 00:24:33.556 "data_size": 63488 00:24:33.556 } 00:24:33.556 ] 00:24:33.556 }' 00:24:33.556 17:03:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:33.556 17:03:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:33.556 17:03:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:33.556 17:03:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:33.556 17:03:22 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:34.123 [2024-11-05 17:03:22.983644] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on 
raid_bdev1 00:24:34.123 [2024-11-05 17:03:22.983863] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:34.123 [2024-11-05 17:03:22.984130] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:34.689 17:03:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:34.689 17:03:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:34.689 17:03:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:34.689 17:03:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:34.689 17:03:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:34.689 17:03:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:34.689 17:03:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.689 17:03:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.947 17:03:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:34.947 "name": "raid_bdev1", 00:24:34.947 "uuid": "934b62a5-79be-4e4a-a487-6e8e4c324f53", 00:24:34.947 "strip_size_kb": 64, 00:24:34.947 "state": "online", 00:24:34.947 "raid_level": "raid5f", 00:24:34.947 "superblock": true, 00:24:34.947 "num_base_bdevs": 3, 00:24:34.947 "num_base_bdevs_discovered": 3, 00:24:34.947 "num_base_bdevs_operational": 3, 00:24:34.947 "base_bdevs_list": [ 00:24:34.947 { 00:24:34.947 "name": "spare", 00:24:34.947 "uuid": "bf3ce5d9-3b27-5a44-b2cb-d8d2b134f72d", 00:24:34.947 "is_configured": true, 00:24:34.947 "data_offset": 2048, 00:24:34.947 "data_size": 63488 00:24:34.947 }, 00:24:34.947 { 00:24:34.947 "name": "BaseBdev2", 00:24:34.947 "uuid": "727b52ab-f3d0-563d-b571-bc6dd07da3da", 00:24:34.947 "is_configured": true, 00:24:34.947 "data_offset": 2048, 00:24:34.947 "data_size": 63488 00:24:34.947 }, 00:24:34.947 { 00:24:34.947 "name": "BaseBdev3", 00:24:34.947 "uuid": "d19b7aff-84e4-5734-8203-97cc6fb6a298", 00:24:34.947 "is_configured": true, 00:24:34.947 "data_offset": 2048, 00:24:34.947 "data_size": 63488 00:24:34.947 } 00:24:34.947 ] 00:24:34.947 }' 00:24:34.947 17:03:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:34.947 17:03:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:34.947 17:03:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:34.947 17:03:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:34.947 17:03:23 -- bdev/bdev_raid.sh@660 -- # break 00:24:34.947 17:03:23 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:34.947 17:03:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:34.947 17:03:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:34.947 17:03:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:34.947 17:03:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:34.947 17:03:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.947 17:03:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.205 17:03:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:35.205 "name": "raid_bdev1", 00:24:35.205 "uuid": "934b62a5-79be-4e4a-a487-6e8e4c324f53", 00:24:35.205 "strip_size_kb": 64, 00:24:35.205 "state": "online", 00:24:35.205 "raid_level": "raid5f", 00:24:35.205 "superblock": true, 00:24:35.205 "num_base_bdevs": 3, 00:24:35.205 
"num_base_bdevs_discovered": 3, 00:24:35.205 "num_base_bdevs_operational": 3, 00:24:35.205 "base_bdevs_list": [ 00:24:35.205 { 00:24:35.205 "name": "spare", 00:24:35.205 "uuid": "bf3ce5d9-3b27-5a44-b2cb-d8d2b134f72d", 00:24:35.205 "is_configured": true, 00:24:35.205 "data_offset": 2048, 00:24:35.205 "data_size": 63488 00:24:35.205 }, 00:24:35.205 { 00:24:35.205 "name": "BaseBdev2", 00:24:35.205 "uuid": "727b52ab-f3d0-563d-b571-bc6dd07da3da", 00:24:35.205 "is_configured": true, 00:24:35.205 "data_offset": 2048, 00:24:35.205 "data_size": 63488 00:24:35.205 }, 00:24:35.205 { 00:24:35.205 "name": "BaseBdev3", 00:24:35.205 "uuid": "d19b7aff-84e4-5734-8203-97cc6fb6a298", 00:24:35.205 "is_configured": true, 00:24:35.205 "data_offset": 2048, 00:24:35.205 "data_size": 63488 00:24:35.205 } 00:24:35.205 ] 00:24:35.205 }' 00:24:35.205 17:03:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:35.205 17:03:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:35.205 17:03:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:35.463 17:03:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:35.463 17:03:24 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:35.463 17:03:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:35.463 17:03:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:35.463 17:03:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:35.463 17:03:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:35.463 17:03:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:35.463 17:03:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:35.463 17:03:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:35.463 17:03:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:35.463 17:03:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:35.463 17:03:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.463 17:03:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.720 17:03:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:35.720 "name": "raid_bdev1", 00:24:35.720 "uuid": "934b62a5-79be-4e4a-a487-6e8e4c324f53", 00:24:35.720 "strip_size_kb": 64, 00:24:35.720 "state": "online", 00:24:35.720 "raid_level": "raid5f", 00:24:35.720 "superblock": true, 00:24:35.720 "num_base_bdevs": 3, 00:24:35.720 "num_base_bdevs_discovered": 3, 00:24:35.720 "num_base_bdevs_operational": 3, 00:24:35.720 "base_bdevs_list": [ 00:24:35.720 { 00:24:35.720 "name": "spare", 00:24:35.720 "uuid": "bf3ce5d9-3b27-5a44-b2cb-d8d2b134f72d", 00:24:35.720 "is_configured": true, 00:24:35.720 "data_offset": 2048, 00:24:35.720 "data_size": 63488 00:24:35.720 }, 00:24:35.720 { 00:24:35.720 "name": "BaseBdev2", 00:24:35.720 "uuid": "727b52ab-f3d0-563d-b571-bc6dd07da3da", 00:24:35.720 "is_configured": true, 00:24:35.720 "data_offset": 2048, 00:24:35.720 "data_size": 63488 00:24:35.720 }, 00:24:35.720 { 00:24:35.720 "name": "BaseBdev3", 00:24:35.720 "uuid": "d19b7aff-84e4-5734-8203-97cc6fb6a298", 00:24:35.720 "is_configured": true, 00:24:35.720 "data_offset": 2048, 00:24:35.720 "data_size": 63488 00:24:35.720 } 00:24:35.720 ] 00:24:35.720 }' 00:24:35.720 17:03:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:35.720 17:03:24 -- common/autotest_common.sh@10 -- # set +x 00:24:36.287 17:03:24 -- bdev/bdev_raid.sh@670 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:36.545 [2024-11-05 17:03:25.184839] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:36.545 [2024-11-05 17:03:25.185018] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:36.545 [2024-11-05 17:03:25.185208] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:36.545 [2024-11-05 17:03:25.185415] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:36.545 [2024-11-05 17:03:25.185523] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:24:36.545 17:03:25 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.545 17:03:25 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:36.802 17:03:25 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:36.802 17:03:25 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:36.802 17:03:25 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:36.802 17:03:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:36.802 17:03:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:36.802 17:03:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:36.802 17:03:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:36.802 17:03:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:36.802 17:03:25 -- bdev/nbd_common.sh@12 -- # local i 00:24:36.802 17:03:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:36.802 17:03:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:36.802 17:03:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:37.060 /dev/nbd0 00:24:37.060 17:03:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:37.060 17:03:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:37.060 17:03:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:37.060 17:03:25 -- common/autotest_common.sh@867 -- # local i 00:24:37.060 17:03:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:37.060 17:03:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:37.060 17:03:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:37.060 17:03:25 -- common/autotest_common.sh@871 -- # break 00:24:37.060 17:03:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:37.060 17:03:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:37.060 17:03:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:37.060 1+0 records in 00:24:37.060 1+0 records out 00:24:37.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551218 s, 7.4 MB/s 00:24:37.060 17:03:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.060 17:03:25 -- common/autotest_common.sh@884 -- # size=4096 00:24:37.060 17:03:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.060 17:03:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:37.060 17:03:25 -- common/autotest_common.sh@887 -- # return 0 00:24:37.060 17:03:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
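The comparison that follows is the key data check of the superblock variant: after the rebuild, the removed member BaseBdev1 and the spare that replaced it should carry identical data. Both bdevs are exported over NBD and diffed past the metadata region; cmp -i N skips the first N bytes of both inputs, and 1048576 B is exactly the 2048-block x 512 B data_offset reported by bdev_raid_get_bdevs, i.e. the per-member superblock area, which can legitimately differ between the two bdevs. A minimal sketch using the same commands as the trace, with $rpc and $sock as the shorthands from the sketch above:

    $rpc -s $sock nbd_start_disk BaseBdev1 /dev/nbd0
    $rpc -s $sock nbd_start_disk spare /dev/nbd1
    # skip the superblock region (2048 blocks * 512 B = 1 MiB) and compare the data areas
    cmp -i 1048576 /dev/nbd0 /dev/nbd1 && echo rebuild data verified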
00:24:37.060 17:03:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:37.060 17:03:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:37.318 /dev/nbd1 00:24:37.318 17:03:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:37.318 17:03:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:37.318 17:03:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:24:37.318 17:03:26 -- common/autotest_common.sh@867 -- # local i 00:24:37.318 17:03:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:37.318 17:03:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:37.318 17:03:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:24:37.318 17:03:26 -- common/autotest_common.sh@871 -- # break 00:24:37.318 17:03:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:37.318 17:03:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:37.318 17:03:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:37.318 1+0 records in 00:24:37.318 1+0 records out 00:24:37.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582249 s, 7.0 MB/s 00:24:37.318 17:03:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.318 17:03:26 -- common/autotest_common.sh@884 -- # size=4096 00:24:37.318 17:03:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.318 17:03:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:37.318 17:03:26 -- common/autotest_common.sh@887 -- # return 0 00:24:37.318 17:03:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:37.318 17:03:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:37.318 17:03:26 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:37.576 17:03:26 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:37.576 17:03:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:37.576 17:03:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:37.576 17:03:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:37.576 17:03:26 -- bdev/nbd_common.sh@51 -- # local i 00:24:37.576 17:03:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:37.576 17:03:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:37.833 17:03:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:37.833 17:03:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:37.833 17:03:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:37.833 17:03:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:37.833 17:03:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:37.833 17:03:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:37.833 17:03:26 -- bdev/nbd_common.sh@41 -- # break 00:24:37.833 17:03:26 -- bdev/nbd_common.sh@45 -- # return 0 00:24:37.833 17:03:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:37.833 17:03:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:38.092 17:03:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:38.092 17:03:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:38.092 17:03:26 -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd1 00:24:38.092 17:03:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:38.092 17:03:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:38.092 17:03:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:38.092 17:03:26 -- bdev/nbd_common.sh@41 -- # break 00:24:38.092 17:03:26 -- bdev/nbd_common.sh@45 -- # return 0 00:24:38.092 17:03:26 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:24:38.092 17:03:26 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:38.092 17:03:26 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:24:38.092 17:03:26 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:38.350 17:03:27 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:38.350 [2024-11-05 17:03:27.189113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:38.350 [2024-11-05 17:03:27.189396] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.350 [2024-11-05 17:03:27.189555] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:38.350 [2024-11-05 17:03:27.189681] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.350 [2024-11-05 17:03:27.191836] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.350 [2024-11-05 17:03:27.192030] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:38.350 [2024-11-05 17:03:27.192243] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:38.350 [2024-11-05 17:03:27.192418] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:38.350 BaseBdev1 00:24:38.350 17:03:27 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:38.350 17:03:27 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:24:38.350 17:03:27 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:24:38.608 17:03:27 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:38.866 [2024-11-05 17:03:27.709187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:38.866 [2024-11-05 17:03:27.709375] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.866 [2024-11-05 17:03:27.709451] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:38.866 [2024-11-05 17:03:27.709649] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.866 [2024-11-05 17:03:27.710072] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.866 [2024-11-05 17:03:27.710270] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:38.866 [2024-11-05 17:03:27.710472] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:24:38.866 [2024-11-05 17:03:27.710589] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:24:38.866 [2024-11-05 17:03:27.710692] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:38.866 [2024-11-05 17:03:27.710749] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state configuring 00:24:38.866 [2024-11-05 17:03:27.710923] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:38.866 BaseBdev2 00:24:38.866 17:03:27 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:38.866 17:03:27 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:24:38.866 17:03:27 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:24:39.124 17:03:27 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:39.382 [2024-11-05 17:03:28.169280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:39.382 [2024-11-05 17:03:28.169472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:39.382 [2024-11-05 17:03:28.169549] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:24:39.382 [2024-11-05 17:03:28.169792] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:39.382 [2024-11-05 17:03:28.170220] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:39.382 [2024-11-05 17:03:28.170402] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:39.382 [2024-11-05 17:03:28.170599] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:24:39.382 [2024-11-05 17:03:28.170728] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:39.382 BaseBdev3 00:24:39.382 17:03:28 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:39.639 17:03:28 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:39.898 [2024-11-05 17:03:28.669441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:39.898 [2024-11-05 17:03:28.669642] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:39.898 [2024-11-05 17:03:28.669717] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:24:39.898 [2024-11-05 17:03:28.669963] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:39.898 [2024-11-05 17:03:28.670522] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:39.898 [2024-11-05 17:03:28.670710] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:39.898 [2024-11-05 17:03:28.670937] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:39.898 [2024-11-05 17:03:28.671062] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:39.898 spare 00:24:39.898 17:03:28 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:39.898 17:03:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:39.898 17:03:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:39.898 17:03:28 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid5f 00:24:39.898 17:03:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:39.898 17:03:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:39.898 17:03:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:39.898 17:03:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:39.898 17:03:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:39.898 17:03:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:39.898 17:03:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.898 17:03:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.898 [2024-11-05 17:03:28.771225] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b480 00:24:39.898 [2024-11-05 17:03:28.771412] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:39.898 [2024-11-05 17:03:28.771562] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:24:39.898 [2024-11-05 17:03:28.775721] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b480 00:24:39.898 [2024-11-05 17:03:28.775891] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b480 00:24:39.898 [2024-11-05 17:03:28.776165] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:40.156 17:03:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:40.156 "name": "raid_bdev1", 00:24:40.156 "uuid": "934b62a5-79be-4e4a-a487-6e8e4c324f53", 00:24:40.156 "strip_size_kb": 64, 00:24:40.156 "state": "online", 00:24:40.156 "raid_level": "raid5f", 00:24:40.156 "superblock": true, 00:24:40.156 "num_base_bdevs": 3, 00:24:40.156 "num_base_bdevs_discovered": 3, 00:24:40.156 "num_base_bdevs_operational": 3, 00:24:40.156 "base_bdevs_list": [ 00:24:40.156 { 00:24:40.156 "name": "spare", 00:24:40.156 "uuid": "bf3ce5d9-3b27-5a44-b2cb-d8d2b134f72d", 00:24:40.156 "is_configured": true, 00:24:40.156 "data_offset": 2048, 00:24:40.156 "data_size": 63488 00:24:40.156 }, 00:24:40.156 { 00:24:40.156 "name": "BaseBdev2", 00:24:40.156 "uuid": "727b52ab-f3d0-563d-b571-bc6dd07da3da", 00:24:40.156 "is_configured": true, 00:24:40.156 "data_offset": 2048, 00:24:40.156 "data_size": 63488 00:24:40.156 }, 00:24:40.156 { 00:24:40.156 "name": "BaseBdev3", 00:24:40.156 "uuid": "d19b7aff-84e4-5734-8203-97cc6fb6a298", 00:24:40.156 "is_configured": true, 00:24:40.156 "data_offset": 2048, 00:24:40.156 "data_size": 63488 00:24:40.156 } 00:24:40.156 ] 00:24:40.156 }' 00:24:40.156 17:03:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:40.156 17:03:28 -- common/autotest_common.sh@10 -- # set +x 00:24:40.722 17:03:29 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:40.722 17:03:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:40.722 17:03:29 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:40.722 17:03:29 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:40.722 17:03:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:40.722 17:03:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.722 17:03:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.979 17:03:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:40.979 "name": "raid_bdev1", 00:24:40.979 
"uuid": "934b62a5-79be-4e4a-a487-6e8e4c324f53", 00:24:40.979 "strip_size_kb": 64, 00:24:40.979 "state": "online", 00:24:40.979 "raid_level": "raid5f", 00:24:40.979 "superblock": true, 00:24:40.979 "num_base_bdevs": 3, 00:24:40.979 "num_base_bdevs_discovered": 3, 00:24:40.979 "num_base_bdevs_operational": 3, 00:24:40.979 "base_bdevs_list": [ 00:24:40.979 { 00:24:40.979 "name": "spare", 00:24:40.979 "uuid": "bf3ce5d9-3b27-5a44-b2cb-d8d2b134f72d", 00:24:40.980 "is_configured": true, 00:24:40.980 "data_offset": 2048, 00:24:40.980 "data_size": 63488 00:24:40.980 }, 00:24:40.980 { 00:24:40.980 "name": "BaseBdev2", 00:24:40.980 "uuid": "727b52ab-f3d0-563d-b571-bc6dd07da3da", 00:24:40.980 "is_configured": true, 00:24:40.980 "data_offset": 2048, 00:24:40.980 "data_size": 63488 00:24:40.980 }, 00:24:40.980 { 00:24:40.980 "name": "BaseBdev3", 00:24:40.980 "uuid": "d19b7aff-84e4-5734-8203-97cc6fb6a298", 00:24:40.980 "is_configured": true, 00:24:40.980 "data_offset": 2048, 00:24:40.980 "data_size": 63488 00:24:40.980 } 00:24:40.980 ] 00:24:40.980 }' 00:24:40.980 17:03:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:40.980 17:03:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:40.980 17:03:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:40.980 17:03:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:40.980 17:03:29 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.980 17:03:29 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:41.237 17:03:30 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:24:41.237 17:03:30 -- bdev/bdev_raid.sh@709 -- # killprocess 128875 00:24:41.237 17:03:30 -- common/autotest_common.sh@936 -- # '[' -z 128875 ']' 00:24:41.237 17:03:30 -- common/autotest_common.sh@940 -- # kill -0 128875 00:24:41.237 17:03:30 -- common/autotest_common.sh@941 -- # uname 00:24:41.237 17:03:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:41.237 17:03:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128875 00:24:41.237 killing process with pid 128875 00:24:41.237 Received shutdown signal, test time was about 60.000000 seconds 00:24:41.237 00:24:41.237 Latency(us) 00:24:41.237 [2024-11-05T17:03:30.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.237 [2024-11-05T17:03:30.114Z] =================================================================================================================== 00:24:41.237 [2024-11-05T17:03:30.114Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:41.237 17:03:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:41.237 17:03:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:41.237 17:03:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 128875' 00:24:41.237 17:03:30 -- common/autotest_common.sh@955 -- # kill 128875 00:24:41.237 17:03:30 -- common/autotest_common.sh@960 -- # wait 128875 00:24:41.237 [2024-11-05 17:03:30.076109] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:41.237 [2024-11-05 17:03:30.076183] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:41.237 [2024-11-05 17:03:30.076265] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:41.237 [2024-11-05 17:03:30.076277] bdev_raid.c: 351:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x61600000b480 name raid_bdev1, state offline 00:24:41.495 [2024-11-05 17:03:30.329913] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:42.429 ************************************ 00:24:42.429 END TEST raid5f_rebuild_test_sb 00:24:42.429 ************************************ 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:42.429 00:24:42.429 real 0m24.366s 00:24:42.429 user 0m38.092s 00:24:42.429 sys 0m2.957s 00:24:42.429 17:03:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:42.429 17:03:31 -- common/autotest_common.sh@10 -- # set +x 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:24:42.429 17:03:31 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:24:42.429 17:03:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:42.429 17:03:31 -- common/autotest_common.sh@10 -- # set +x 00:24:42.429 ************************************ 00:24:42.429 START TEST raid5f_state_function_test 00:24:42.429 ************************************ 00:24:42.429 17:03:31 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 4 false 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:42.429 17:03:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:42.688 17:03:31 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:42.688 17:03:31 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:42.688 17:03:31 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:42.688 17:03:31 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:24:42.688 17:03:31 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:42.688 17:03:31 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:42.688 17:03:31 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:24:42.688 17:03:31 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:42.688 17:03:31 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:42.688 17:03:31 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:24:42.688 17:03:31 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:24:42.688 17:03:31 -- bdev/bdev_raid.sh@226 -- # raid_pid=129512 00:24:42.688 17:03:31 -- bdev/bdev_raid.sh@227 -- # echo 'Process 
raid pid: 129512' 00:24:42.688 17:03:31 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:42.688 Process raid pid: 129512 00:24:42.688 17:03:31 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129512 /var/tmp/spdk-raid.sock 00:24:42.688 17:03:31 -- common/autotest_common.sh@829 -- # '[' -z 129512 ']' 00:24:42.688 17:03:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:42.688 17:03:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:42.688 17:03:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:42.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:42.688 17:03:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:42.688 17:03:31 -- common/autotest_common.sh@10 -- # set +x 00:24:42.688 [2024-11-05 17:03:31.402213] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:42.688 [2024-11-05 17:03:31.402653] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.688 [2024-11-05 17:03:31.576409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.946 [2024-11-05 17:03:31.740002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.204 [2024-11-05 17:03:31.913478] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:43.461 17:03:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:43.461 17:03:32 -- common/autotest_common.sh@862 -- # return 0 00:24:43.461 17:03:32 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:43.731 [2024-11-05 17:03:32.512353] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:43.731 [2024-11-05 17:03:32.512589] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:43.731 [2024-11-05 17:03:32.512717] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:43.731 [2024-11-05 17:03:32.512789] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:43.731 [2024-11-05 17:03:32.512930] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:43.731 [2024-11-05 17:03:32.513020] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:43.731 [2024-11-05 17:03:32.513222] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:43.731 [2024-11-05 17:03:32.513297] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:43.731 17:03:32 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:43.731 17:03:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:43.731 17:03:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:43.731 17:03:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:43.731 17:03:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:43.731 17:03:32 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:43.731 17:03:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:43.731 17:03:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:43.731 17:03:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:43.731 17:03:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:43.731 17:03:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:43.731 17:03:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.004 17:03:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:44.004 "name": "Existed_Raid", 00:24:44.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.004 "strip_size_kb": 64, 00:24:44.004 "state": "configuring", 00:24:44.004 "raid_level": "raid5f", 00:24:44.004 "superblock": false, 00:24:44.004 "num_base_bdevs": 4, 00:24:44.004 "num_base_bdevs_discovered": 0, 00:24:44.004 "num_base_bdevs_operational": 4, 00:24:44.004 "base_bdevs_list": [ 00:24:44.004 { 00:24:44.004 "name": "BaseBdev1", 00:24:44.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.004 "is_configured": false, 00:24:44.004 "data_offset": 0, 00:24:44.004 "data_size": 0 00:24:44.004 }, 00:24:44.004 { 00:24:44.004 "name": "BaseBdev2", 00:24:44.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.004 "is_configured": false, 00:24:44.004 "data_offset": 0, 00:24:44.004 "data_size": 0 00:24:44.004 }, 00:24:44.004 { 00:24:44.004 "name": "BaseBdev3", 00:24:44.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.004 "is_configured": false, 00:24:44.004 "data_offset": 0, 00:24:44.004 "data_size": 0 00:24:44.004 }, 00:24:44.004 { 00:24:44.004 "name": "BaseBdev4", 00:24:44.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.004 "is_configured": false, 00:24:44.004 "data_offset": 0, 00:24:44.004 "data_size": 0 00:24:44.004 } 00:24:44.004 ] 00:24:44.004 }' 00:24:44.004 17:03:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:44.004 17:03:32 -- common/autotest_common.sh@10 -- # set +x 00:24:44.571 17:03:33 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:44.829 [2024-11-05 17:03:33.552456] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:44.829 [2024-11-05 17:03:33.552633] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:24:44.829 17:03:33 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:45.086 [2024-11-05 17:03:33.816535] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:45.086 [2024-11-05 17:03:33.816959] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:45.086 [2024-11-05 17:03:33.817089] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:45.086 [2024-11-05 17:03:33.817164] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:45.086 [2024-11-05 17:03:33.817281] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:45.086 [2024-11-05 17:03:33.817372] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 
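
For reference, the verify_raid_bdev_state helper being traced here reduces to a single RPC plus a jq filter over its JSON output. A minimal standalone sketch of the same check, assuming the rpc.py script and /var/tmp/spdk-raid.sock socket used throughout this run; the bdev name "Existed_Raid" and the expected "configuring" state match the trace, everything else is illustrative:

# Sketch only: fetch a raid bdev's info over the RPC socket and assert on its state.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
state=$(jq -r '.state' <<< "$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
[[ "$state" == "configuring" ]] || { echo "unexpected state: $state" >&2; exit 1; }
echo "Existed_Raid: state=$state, base bdevs discovered: $discovered"
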
00:24:45.086 [2024-11-05 17:03:33.817559] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:45.086 [2024-11-05 17:03:33.817631] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:45.086 17:03:33 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:45.344 [2024-11-05 17:03:34.034531] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:45.344 BaseBdev1 00:24:45.344 17:03:34 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:45.344 17:03:34 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:45.344 17:03:34 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:45.344 17:03:34 -- common/autotest_common.sh@899 -- # local i 00:24:45.344 17:03:34 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:45.344 17:03:34 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:45.344 17:03:34 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:45.344 17:03:34 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:45.602 [ 00:24:45.602 { 00:24:45.602 "name": "BaseBdev1", 00:24:45.602 "aliases": [ 00:24:45.602 "7f52f4ee-5a63-4e21-8231-97589344a027" 00:24:45.602 ], 00:24:45.602 "product_name": "Malloc disk", 00:24:45.602 "block_size": 512, 00:24:45.602 "num_blocks": 65536, 00:24:45.602 "uuid": "7f52f4ee-5a63-4e21-8231-97589344a027", 00:24:45.602 "assigned_rate_limits": { 00:24:45.602 "rw_ios_per_sec": 0, 00:24:45.602 "rw_mbytes_per_sec": 0, 00:24:45.602 "r_mbytes_per_sec": 0, 00:24:45.602 "w_mbytes_per_sec": 0 00:24:45.602 }, 00:24:45.602 "claimed": true, 00:24:45.602 "claim_type": "exclusive_write", 00:24:45.602 "zoned": false, 00:24:45.602 "supported_io_types": { 00:24:45.602 "read": true, 00:24:45.602 "write": true, 00:24:45.602 "unmap": true, 00:24:45.602 "write_zeroes": true, 00:24:45.602 "flush": true, 00:24:45.602 "reset": true, 00:24:45.602 "compare": false, 00:24:45.602 "compare_and_write": false, 00:24:45.602 "abort": true, 00:24:45.602 "nvme_admin": false, 00:24:45.602 "nvme_io": false 00:24:45.602 }, 00:24:45.602 "memory_domains": [ 00:24:45.602 { 00:24:45.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.602 "dma_device_type": 2 00:24:45.602 } 00:24:45.602 ], 00:24:45.602 "driver_specific": {} 00:24:45.602 } 00:24:45.602 ] 00:24:45.602 17:03:34 -- common/autotest_common.sh@905 -- # return 0 00:24:45.602 17:03:34 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:45.602 17:03:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:45.602 17:03:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:45.602 17:03:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:45.602 17:03:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:45.602 17:03:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:45.602 17:03:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:45.602 17:03:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:45.602 17:03:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:45.602 17:03:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:45.602 17:03:34 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.602 17:03:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:45.860 17:03:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:45.860 "name": "Existed_Raid", 00:24:45.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.860 "strip_size_kb": 64, 00:24:45.860 "state": "configuring", 00:24:45.860 "raid_level": "raid5f", 00:24:45.860 "superblock": false, 00:24:45.860 "num_base_bdevs": 4, 00:24:45.860 "num_base_bdevs_discovered": 1, 00:24:45.860 "num_base_bdevs_operational": 4, 00:24:45.860 "base_bdevs_list": [ 00:24:45.860 { 00:24:45.860 "name": "BaseBdev1", 00:24:45.860 "uuid": "7f52f4ee-5a63-4e21-8231-97589344a027", 00:24:45.860 "is_configured": true, 00:24:45.860 "data_offset": 0, 00:24:45.860 "data_size": 65536 00:24:45.860 }, 00:24:45.860 { 00:24:45.860 "name": "BaseBdev2", 00:24:45.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.860 "is_configured": false, 00:24:45.860 "data_offset": 0, 00:24:45.860 "data_size": 0 00:24:45.860 }, 00:24:45.860 { 00:24:45.860 "name": "BaseBdev3", 00:24:45.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.860 "is_configured": false, 00:24:45.860 "data_offset": 0, 00:24:45.860 "data_size": 0 00:24:45.860 }, 00:24:45.860 { 00:24:45.860 "name": "BaseBdev4", 00:24:45.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.860 "is_configured": false, 00:24:45.860 "data_offset": 0, 00:24:45.860 "data_size": 0 00:24:45.860 } 00:24:45.860 ] 00:24:45.860 }' 00:24:45.860 17:03:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:45.860 17:03:34 -- common/autotest_common.sh@10 -- # set +x 00:24:46.427 17:03:35 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:46.685 [2024-11-05 17:03:35.470808] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:46.685 [2024-11-05 17:03:35.471068] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:24:46.685 17:03:35 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:24:46.685 17:03:35 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:46.943 [2024-11-05 17:03:35.650913] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:46.943 [2024-11-05 17:03:35.652946] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:46.943 [2024-11-05 17:03:35.653196] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:46.943 [2024-11-05 17:03:35.653325] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:46.943 [2024-11-05 17:03:35.653414] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:46.943 [2024-11-05 17:03:35.653625] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:46.943 [2024-11-05 17:03:35.653693] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:46.943 17:03:35 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:46.943 17:03:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:46.943 17:03:35 -- bdev/bdev_raid.sh@255 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:46.943 17:03:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:46.943 17:03:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:46.943 17:03:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:46.943 17:03:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:46.943 17:03:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:46.943 17:03:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:46.943 17:03:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:46.943 17:03:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:46.943 17:03:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:46.943 17:03:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.943 17:03:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:47.201 17:03:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:47.201 "name": "Existed_Raid", 00:24:47.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.201 "strip_size_kb": 64, 00:24:47.201 "state": "configuring", 00:24:47.201 "raid_level": "raid5f", 00:24:47.201 "superblock": false, 00:24:47.201 "num_base_bdevs": 4, 00:24:47.201 "num_base_bdevs_discovered": 1, 00:24:47.201 "num_base_bdevs_operational": 4, 00:24:47.201 "base_bdevs_list": [ 00:24:47.201 { 00:24:47.201 "name": "BaseBdev1", 00:24:47.201 "uuid": "7f52f4ee-5a63-4e21-8231-97589344a027", 00:24:47.201 "is_configured": true, 00:24:47.201 "data_offset": 0, 00:24:47.201 "data_size": 65536 00:24:47.201 }, 00:24:47.201 { 00:24:47.201 "name": "BaseBdev2", 00:24:47.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.201 "is_configured": false, 00:24:47.201 "data_offset": 0, 00:24:47.201 "data_size": 0 00:24:47.201 }, 00:24:47.201 { 00:24:47.201 "name": "BaseBdev3", 00:24:47.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.201 "is_configured": false, 00:24:47.201 "data_offset": 0, 00:24:47.201 "data_size": 0 00:24:47.201 }, 00:24:47.201 { 00:24:47.201 "name": "BaseBdev4", 00:24:47.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.201 "is_configured": false, 00:24:47.201 "data_offset": 0, 00:24:47.201 "data_size": 0 00:24:47.201 } 00:24:47.201 ] 00:24:47.201 }' 00:24:47.201 17:03:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:47.201 17:03:35 -- common/autotest_common.sh@10 -- # set +x 00:24:47.766 17:03:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:48.024 [2024-11-05 17:03:36.732361] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:48.024 BaseBdev2 00:24:48.024 17:03:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:48.024 17:03:36 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:48.024 17:03:36 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:48.024 17:03:36 -- common/autotest_common.sh@899 -- # local i 00:24:48.024 17:03:36 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:48.024 17:03:36 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:48.024 17:03:36 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:48.282 17:03:36 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:48.282 [ 00:24:48.282 { 00:24:48.282 "name": "BaseBdev2", 00:24:48.282 "aliases": [ 00:24:48.282 "b15f2a95-de81-4c27-a3d2-382a7d7f7663" 00:24:48.282 ], 00:24:48.282 "product_name": "Malloc disk", 00:24:48.282 "block_size": 512, 00:24:48.282 "num_blocks": 65536, 00:24:48.282 "uuid": "b15f2a95-de81-4c27-a3d2-382a7d7f7663", 00:24:48.282 "assigned_rate_limits": { 00:24:48.282 "rw_ios_per_sec": 0, 00:24:48.282 "rw_mbytes_per_sec": 0, 00:24:48.282 "r_mbytes_per_sec": 0, 00:24:48.282 "w_mbytes_per_sec": 0 00:24:48.282 }, 00:24:48.282 "claimed": true, 00:24:48.282 "claim_type": "exclusive_write", 00:24:48.282 "zoned": false, 00:24:48.282 "supported_io_types": { 00:24:48.282 "read": true, 00:24:48.282 "write": true, 00:24:48.282 "unmap": true, 00:24:48.282 "write_zeroes": true, 00:24:48.282 "flush": true, 00:24:48.282 "reset": true, 00:24:48.282 "compare": false, 00:24:48.282 "compare_and_write": false, 00:24:48.282 "abort": true, 00:24:48.282 "nvme_admin": false, 00:24:48.282 "nvme_io": false 00:24:48.282 }, 00:24:48.282 "memory_domains": [ 00:24:48.282 { 00:24:48.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:48.282 "dma_device_type": 2 00:24:48.282 } 00:24:48.282 ], 00:24:48.282 "driver_specific": {} 00:24:48.282 } 00:24:48.282 ] 00:24:48.282 17:03:37 -- common/autotest_common.sh@905 -- # return 0 00:24:48.282 17:03:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:48.282 17:03:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:48.282 17:03:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:48.282 17:03:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:48.282 17:03:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:48.282 17:03:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:48.282 17:03:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:48.282 17:03:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:48.282 17:03:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:48.282 17:03:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:48.282 17:03:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:48.282 17:03:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:48.282 17:03:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.282 17:03:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:48.540 17:03:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:48.540 "name": "Existed_Raid", 00:24:48.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.540 "strip_size_kb": 64, 00:24:48.540 "state": "configuring", 00:24:48.540 "raid_level": "raid5f", 00:24:48.540 "superblock": false, 00:24:48.540 "num_base_bdevs": 4, 00:24:48.540 "num_base_bdevs_discovered": 2, 00:24:48.540 "num_base_bdevs_operational": 4, 00:24:48.540 "base_bdevs_list": [ 00:24:48.540 { 00:24:48.540 "name": "BaseBdev1", 00:24:48.540 "uuid": "7f52f4ee-5a63-4e21-8231-97589344a027", 00:24:48.540 "is_configured": true, 00:24:48.540 "data_offset": 0, 00:24:48.540 "data_size": 65536 00:24:48.540 }, 00:24:48.540 { 00:24:48.540 "name": "BaseBdev2", 00:24:48.540 "uuid": "b15f2a95-de81-4c27-a3d2-382a7d7f7663", 00:24:48.540 "is_configured": true, 00:24:48.540 "data_offset": 0, 00:24:48.540 "data_size": 65536 00:24:48.540 }, 00:24:48.540 { 00:24:48.540 "name": "BaseBdev3", 
00:24:48.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.540 "is_configured": false, 00:24:48.540 "data_offset": 0, 00:24:48.540 "data_size": 0 00:24:48.540 }, 00:24:48.540 { 00:24:48.540 "name": "BaseBdev4", 00:24:48.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.540 "is_configured": false, 00:24:48.540 "data_offset": 0, 00:24:48.540 "data_size": 0 00:24:48.540 } 00:24:48.541 ] 00:24:48.541 }' 00:24:48.541 17:03:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:48.541 17:03:37 -- common/autotest_common.sh@10 -- # set +x 00:24:49.106 17:03:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:49.364 [2024-11-05 17:03:38.228714] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:49.364 BaseBdev3 00:24:49.364 17:03:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:24:49.364 17:03:38 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:49.364 17:03:38 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:49.364 17:03:38 -- common/autotest_common.sh@899 -- # local i 00:24:49.364 17:03:38 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:49.364 17:03:38 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:49.364 17:03:38 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:49.623 17:03:38 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:49.881 [ 00:24:49.881 { 00:24:49.881 "name": "BaseBdev3", 00:24:49.881 "aliases": [ 00:24:49.881 "c0c6d778-8381-4134-91c5-7728c185ac98" 00:24:49.881 ], 00:24:49.881 "product_name": "Malloc disk", 00:24:49.881 "block_size": 512, 00:24:49.881 "num_blocks": 65536, 00:24:49.881 "uuid": "c0c6d778-8381-4134-91c5-7728c185ac98", 00:24:49.881 "assigned_rate_limits": { 00:24:49.881 "rw_ios_per_sec": 0, 00:24:49.881 "rw_mbytes_per_sec": 0, 00:24:49.881 "r_mbytes_per_sec": 0, 00:24:49.881 "w_mbytes_per_sec": 0 00:24:49.881 }, 00:24:49.881 "claimed": true, 00:24:49.881 "claim_type": "exclusive_write", 00:24:49.881 "zoned": false, 00:24:49.881 "supported_io_types": { 00:24:49.881 "read": true, 00:24:49.881 "write": true, 00:24:49.881 "unmap": true, 00:24:49.881 "write_zeroes": true, 00:24:49.881 "flush": true, 00:24:49.881 "reset": true, 00:24:49.881 "compare": false, 00:24:49.881 "compare_and_write": false, 00:24:49.881 "abort": true, 00:24:49.881 "nvme_admin": false, 00:24:49.881 "nvme_io": false 00:24:49.881 }, 00:24:49.881 "memory_domains": [ 00:24:49.881 { 00:24:49.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:49.881 "dma_device_type": 2 00:24:49.881 } 00:24:49.881 ], 00:24:49.881 "driver_specific": {} 00:24:49.881 } 00:24:49.881 ] 00:24:49.881 17:03:38 -- common/autotest_common.sh@905 -- # return 0 00:24:49.881 17:03:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:49.881 17:03:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:49.881 17:03:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:49.881 17:03:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:49.881 17:03:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:49.881 17:03:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:49.881 17:03:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:49.881 
17:03:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:49.881 17:03:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:49.881 17:03:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:49.881 17:03:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:49.881 17:03:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:49.881 17:03:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.881 17:03:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:50.140 17:03:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:50.140 "name": "Existed_Raid", 00:24:50.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.140 "strip_size_kb": 64, 00:24:50.140 "state": "configuring", 00:24:50.140 "raid_level": "raid5f", 00:24:50.140 "superblock": false, 00:24:50.140 "num_base_bdevs": 4, 00:24:50.140 "num_base_bdevs_discovered": 3, 00:24:50.140 "num_base_bdevs_operational": 4, 00:24:50.140 "base_bdevs_list": [ 00:24:50.140 { 00:24:50.140 "name": "BaseBdev1", 00:24:50.140 "uuid": "7f52f4ee-5a63-4e21-8231-97589344a027", 00:24:50.140 "is_configured": true, 00:24:50.140 "data_offset": 0, 00:24:50.140 "data_size": 65536 00:24:50.140 }, 00:24:50.140 { 00:24:50.140 "name": "BaseBdev2", 00:24:50.140 "uuid": "b15f2a95-de81-4c27-a3d2-382a7d7f7663", 00:24:50.140 "is_configured": true, 00:24:50.140 "data_offset": 0, 00:24:50.140 "data_size": 65536 00:24:50.140 }, 00:24:50.140 { 00:24:50.140 "name": "BaseBdev3", 00:24:50.140 "uuid": "c0c6d778-8381-4134-91c5-7728c185ac98", 00:24:50.140 "is_configured": true, 00:24:50.140 "data_offset": 0, 00:24:50.140 "data_size": 65536 00:24:50.140 }, 00:24:50.140 { 00:24:50.140 "name": "BaseBdev4", 00:24:50.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.140 "is_configured": false, 00:24:50.140 "data_offset": 0, 00:24:50.140 "data_size": 0 00:24:50.140 } 00:24:50.140 ] 00:24:50.140 }' 00:24:50.140 17:03:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:50.140 17:03:38 -- common/autotest_common.sh@10 -- # set +x 00:24:50.706 17:03:39 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:50.964 [2024-11-05 17:03:39.845829] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:50.964 [2024-11-05 17:03:39.846097] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:24:50.964 [2024-11-05 17:03:39.846146] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:24:50.964 [2024-11-05 17:03:39.846398] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:24:50.964 [2024-11-05 17:03:39.852292] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:24:50.964 [2024-11-05 17:03:39.852442] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:24:50.964 [2024-11-05 17:03:39.852802] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:50.964 BaseBdev4 00:24:51.221 17:03:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:24:51.221 17:03:39 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:51.221 17:03:39 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:51.222 17:03:39 -- common/autotest_common.sh@899 -- # local i 00:24:51.222 
17:03:39 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:51.222 17:03:39 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:51.222 17:03:39 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:51.222 17:03:40 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:51.480 [ 00:24:51.480 { 00:24:51.480 "name": "BaseBdev4", 00:24:51.480 "aliases": [ 00:24:51.480 "4a305987-f4bc-497e-8498-dc807a481f5f" 00:24:51.480 ], 00:24:51.480 "product_name": "Malloc disk", 00:24:51.480 "block_size": 512, 00:24:51.480 "num_blocks": 65536, 00:24:51.480 "uuid": "4a305987-f4bc-497e-8498-dc807a481f5f", 00:24:51.480 "assigned_rate_limits": { 00:24:51.480 "rw_ios_per_sec": 0, 00:24:51.480 "rw_mbytes_per_sec": 0, 00:24:51.480 "r_mbytes_per_sec": 0, 00:24:51.480 "w_mbytes_per_sec": 0 00:24:51.480 }, 00:24:51.480 "claimed": true, 00:24:51.480 "claim_type": "exclusive_write", 00:24:51.480 "zoned": false, 00:24:51.480 "supported_io_types": { 00:24:51.480 "read": true, 00:24:51.480 "write": true, 00:24:51.480 "unmap": true, 00:24:51.480 "write_zeroes": true, 00:24:51.480 "flush": true, 00:24:51.480 "reset": true, 00:24:51.480 "compare": false, 00:24:51.480 "compare_and_write": false, 00:24:51.480 "abort": true, 00:24:51.480 "nvme_admin": false, 00:24:51.480 "nvme_io": false 00:24:51.480 }, 00:24:51.480 "memory_domains": [ 00:24:51.480 { 00:24:51.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:51.480 "dma_device_type": 2 00:24:51.480 } 00:24:51.480 ], 00:24:51.480 "driver_specific": {} 00:24:51.480 } 00:24:51.480 ] 00:24:51.480 17:03:40 -- common/autotest_common.sh@905 -- # return 0 00:24:51.480 17:03:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:51.480 17:03:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:51.480 17:03:40 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:24:51.480 17:03:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:51.480 17:03:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:51.480 17:03:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:51.480 17:03:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:51.480 17:03:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:51.480 17:03:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:51.480 17:03:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:51.480 17:03:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:51.480 17:03:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:51.480 17:03:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.480 17:03:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:51.739 17:03:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:51.739 "name": "Existed_Raid", 00:24:51.739 "uuid": "60691086-fd0e-42f2-a31e-b40078c8e233", 00:24:51.739 "strip_size_kb": 64, 00:24:51.739 "state": "online", 00:24:51.739 "raid_level": "raid5f", 00:24:51.739 "superblock": false, 00:24:51.739 "num_base_bdevs": 4, 00:24:51.739 "num_base_bdevs_discovered": 4, 00:24:51.739 "num_base_bdevs_operational": 4, 00:24:51.739 "base_bdevs_list": [ 00:24:51.739 { 00:24:51.739 "name": "BaseBdev1", 00:24:51.739 "uuid": "7f52f4ee-5a63-4e21-8231-97589344a027", 00:24:51.739 
"is_configured": true, 00:24:51.739 "data_offset": 0, 00:24:51.739 "data_size": 65536 00:24:51.739 }, 00:24:51.739 { 00:24:51.739 "name": "BaseBdev2", 00:24:51.739 "uuid": "b15f2a95-de81-4c27-a3d2-382a7d7f7663", 00:24:51.739 "is_configured": true, 00:24:51.739 "data_offset": 0, 00:24:51.739 "data_size": 65536 00:24:51.739 }, 00:24:51.739 { 00:24:51.739 "name": "BaseBdev3", 00:24:51.739 "uuid": "c0c6d778-8381-4134-91c5-7728c185ac98", 00:24:51.739 "is_configured": true, 00:24:51.739 "data_offset": 0, 00:24:51.739 "data_size": 65536 00:24:51.739 }, 00:24:51.739 { 00:24:51.739 "name": "BaseBdev4", 00:24:51.739 "uuid": "4a305987-f4bc-497e-8498-dc807a481f5f", 00:24:51.739 "is_configured": true, 00:24:51.739 "data_offset": 0, 00:24:51.739 "data_size": 65536 00:24:51.739 } 00:24:51.739 ] 00:24:51.739 }' 00:24:51.739 17:03:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:51.739 17:03:40 -- common/autotest_common.sh@10 -- # set +x 00:24:52.305 17:03:41 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:52.563 [2024-11-05 17:03:41.336033] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:52.563 17:03:41 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:52.563 17:03:41 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:52.563 17:03:41 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:52.563 17:03:41 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:52.563 17:03:41 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:52.563 17:03:41 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:52.563 17:03:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:52.563 17:03:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:52.563 17:03:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:52.563 17:03:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:52.563 17:03:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:52.563 17:03:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:52.563 17:03:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:52.563 17:03:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:52.563 17:03:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:52.563 17:03:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.563 17:03:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:52.822 17:03:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:52.822 "name": "Existed_Raid", 00:24:52.822 "uuid": "60691086-fd0e-42f2-a31e-b40078c8e233", 00:24:52.822 "strip_size_kb": 64, 00:24:52.822 "state": "online", 00:24:52.822 "raid_level": "raid5f", 00:24:52.822 "superblock": false, 00:24:52.822 "num_base_bdevs": 4, 00:24:52.822 "num_base_bdevs_discovered": 3, 00:24:52.822 "num_base_bdevs_operational": 3, 00:24:52.822 "base_bdevs_list": [ 00:24:52.822 { 00:24:52.822 "name": null, 00:24:52.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.822 "is_configured": false, 00:24:52.822 "data_offset": 0, 00:24:52.822 "data_size": 65536 00:24:52.822 }, 00:24:52.822 { 00:24:52.822 "name": "BaseBdev2", 00:24:52.822 "uuid": "b15f2a95-de81-4c27-a3d2-382a7d7f7663", 00:24:52.822 "is_configured": true, 00:24:52.822 "data_offset": 0, 00:24:52.822 "data_size": 65536 00:24:52.822 }, 00:24:52.822 { 00:24:52.822 "name": "BaseBdev3", 
00:24:52.822 "uuid": "c0c6d778-8381-4134-91c5-7728c185ac98", 00:24:52.822 "is_configured": true, 00:24:52.822 "data_offset": 0, 00:24:52.822 "data_size": 65536 00:24:52.822 }, 00:24:52.822 { 00:24:52.822 "name": "BaseBdev4", 00:24:52.822 "uuid": "4a305987-f4bc-497e-8498-dc807a481f5f", 00:24:52.822 "is_configured": true, 00:24:52.822 "data_offset": 0, 00:24:52.822 "data_size": 65536 00:24:52.822 } 00:24:52.822 ] 00:24:52.822 }' 00:24:52.822 17:03:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:52.822 17:03:41 -- common/autotest_common.sh@10 -- # set +x 00:24:53.388 17:03:42 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:53.389 17:03:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:53.389 17:03:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.389 17:03:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:53.647 17:03:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:53.647 17:03:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:53.647 17:03:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:53.905 [2024-11-05 17:03:42.723892] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:53.905 [2024-11-05 17:03:42.724063] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:53.905 [2024-11-05 17:03:42.724254] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:54.163 17:03:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:54.163 17:03:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:54.163 17:03:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.163 17:03:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:54.421 17:03:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:54.421 17:03:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:54.421 17:03:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:54.421 [2024-11-05 17:03:43.251245] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:54.679 17:03:43 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:54.679 17:03:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:54.679 17:03:43 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:54.679 17:03:43 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.679 17:03:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:54.679 17:03:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:54.679 17:03:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:54.937 [2024-11-05 17:03:43.738810] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:54.937 [2024-11-05 17:03:43.739041] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:24:54.937 17:03:43 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:54.937 17:03:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:54.937 17:03:43 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 
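
The base-bdev removal checks above all follow one shape: delete a malloc bdev, then re-read the raid bdev and verify how state and num_base_bdevs_discovered respond (raid5f survives the first loss out of four and stays online with three discovered; a further loss drives it offline). A hedged sketch of one such iteration, using only RPCs that appear in the trace and the paths from this run:

# Sketch of one remove-and-verify step; bdev names as in this run.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
"$rpc" -s "$sock" bdev_malloc_delete BaseBdev1
# jq -e exits non-zero when the assertion is false, failing the step.
"$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -e '
  .[] | select(.name == "Existed_Raid")
      | .state == "online" and .num_base_bdevs_discovered == 3' >/dev/null
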
00:24:54.937 17:03:43 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.194 17:03:44 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:55.194 17:03:44 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:55.194 17:03:44 -- bdev/bdev_raid.sh@287 -- # killprocess 129512 00:24:55.194 17:03:44 -- common/autotest_common.sh@936 -- # '[' -z 129512 ']' 00:24:55.194 17:03:44 -- common/autotest_common.sh@940 -- # kill -0 129512 00:24:55.194 17:03:44 -- common/autotest_common.sh@941 -- # uname 00:24:55.194 17:03:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:55.194 17:03:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129512 00:24:55.194 killing process with pid 129512 00:24:55.194 17:03:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:55.194 17:03:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:55.194 17:03:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 129512' 00:24:55.194 17:03:44 -- common/autotest_common.sh@955 -- # kill 129512 00:24:55.194 17:03:44 -- common/autotest_common.sh@960 -- # wait 129512 00:24:55.194 [2024-11-05 17:03:44.067933] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:55.194 [2024-11-05 17:03:44.068028] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:56.127 ************************************ 00:24:56.127 END TEST raid5f_state_function_test 00:24:56.127 ************************************ 00:24:56.127 17:03:44 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:56.127 00:24:56.127 real 0m13.659s 00:24:56.127 user 0m24.297s 00:24:56.127 sys 0m1.670s 00:24:56.127 17:03:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:56.127 17:03:44 -- common/autotest_common.sh@10 -- # set +x 00:24:56.127 17:03:45 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:24:56.127 17:03:45 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:24:56.127 17:03:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:56.127 17:03:45 -- common/autotest_common.sh@10 -- # set +x 00:24:56.386 ************************************ 00:24:56.386 START TEST raid5f_state_function_test_sb 00:24:56.386 ************************************ 00:24:56.386 17:03:45 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 4 true 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:56.386 17:03:45 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@226 -- # raid_pid=129951 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:56.386 Process raid pid: 129951 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129951' 00:24:56.386 17:03:45 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129951 /var/tmp/spdk-raid.sock 00:24:56.386 17:03:45 -- common/autotest_common.sh@829 -- # '[' -z 129951 ']' 00:24:56.386 17:03:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:56.386 17:03:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:56.386 17:03:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:56.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:56.386 17:03:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:56.386 17:03:45 -- common/autotest_common.sh@10 -- # set +x 00:24:56.386 [2024-11-05 17:03:45.093668] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
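
Each state-function variant starts its own bdev_svc app and waits for the RPC socket before driving it; the _sb variant differs only in passing -s to bdev_raid_create so on-disk superblocks are written. Below is a sketch of the launch-and-wait step, with the binary path and flags taken from the trace; the readiness poll is an illustrative stand-in for the waitforlisten helper and assumes rpc.py exposes rpc_get_methods:

# Sketch of the per-test app launch; not the harness's exact helper.
svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
"$svc" -r "$sock" -i 0 -L bdev_raid &
raid_pid=$!
# Poll until the UNIX socket answers RPCs (no timeout here; a sketch only).
until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
# ... test body runs here; teardown mirrors the killprocess helper:
kill "$raid_pid"; wait "$raid_pid"
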
00:24:56.386 [2024-11-05 17:03:45.093950] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.386 [2024-11-05 17:03:45.241039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.644 [2024-11-05 17:03:45.400576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.903 [2024-11-05 17:03:45.568501] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:57.161 17:03:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:57.161 17:03:46 -- common/autotest_common.sh@862 -- # return 0 00:24:57.161 17:03:46 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:57.419 [2024-11-05 17:03:46.212125] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:57.419 [2024-11-05 17:03:46.212318] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:57.419 [2024-11-05 17:03:46.212451] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:57.419 [2024-11-05 17:03:46.212513] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:57.419 [2024-11-05 17:03:46.212609] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:57.419 [2024-11-05 17:03:46.212682] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:57.419 [2024-11-05 17:03:46.212804] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:57.419 [2024-11-05 17:03:46.212863] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:57.419 17:03:46 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:57.419 17:03:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:57.419 17:03:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:57.419 17:03:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:57.419 17:03:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:57.419 17:03:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:57.419 17:03:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:57.419 17:03:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:57.419 17:03:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:57.419 17:03:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:57.419 17:03:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.419 17:03:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:57.678 17:03:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:57.678 "name": "Existed_Raid", 00:24:57.678 "uuid": "3622da07-bd61-428c-8b75-90e2382b99f3", 00:24:57.678 "strip_size_kb": 64, 00:24:57.678 "state": "configuring", 00:24:57.678 "raid_level": "raid5f", 00:24:57.678 "superblock": true, 00:24:57.678 "num_base_bdevs": 4, 00:24:57.678 "num_base_bdevs_discovered": 0, 00:24:57.678 "num_base_bdevs_operational": 4, 00:24:57.678 "base_bdevs_list": [ 00:24:57.678 { 
00:24:57.678 "name": "BaseBdev1", 00:24:57.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.678 "is_configured": false, 00:24:57.678 "data_offset": 0, 00:24:57.678 "data_size": 0 00:24:57.678 }, 00:24:57.678 { 00:24:57.678 "name": "BaseBdev2", 00:24:57.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.678 "is_configured": false, 00:24:57.678 "data_offset": 0, 00:24:57.678 "data_size": 0 00:24:57.678 }, 00:24:57.678 { 00:24:57.678 "name": "BaseBdev3", 00:24:57.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.678 "is_configured": false, 00:24:57.678 "data_offset": 0, 00:24:57.678 "data_size": 0 00:24:57.678 }, 00:24:57.678 { 00:24:57.678 "name": "BaseBdev4", 00:24:57.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.678 "is_configured": false, 00:24:57.678 "data_offset": 0, 00:24:57.678 "data_size": 0 00:24:57.678 } 00:24:57.678 ] 00:24:57.678 }' 00:24:57.678 17:03:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:57.678 17:03:46 -- common/autotest_common.sh@10 -- # set +x 00:24:58.244 17:03:47 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:58.502 [2024-11-05 17:03:47.232154] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:58.502 [2024-11-05 17:03:47.232518] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:24:58.502 17:03:47 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:58.761 [2024-11-05 17:03:47.412247] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:58.761 [2024-11-05 17:03:47.412424] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:58.761 [2024-11-05 17:03:47.412522] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:58.761 [2024-11-05 17:03:47.412676] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:58.761 [2024-11-05 17:03:47.412770] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:58.761 [2024-11-05 17:03:47.412929] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:58.761 [2024-11-05 17:03:47.413040] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:58.761 [2024-11-05 17:03:47.413107] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:58.761 17:03:47 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:59.020 [2024-11-05 17:03:47.686971] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:59.020 BaseBdev1 00:24:59.020 17:03:47 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:59.020 17:03:47 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:59.020 17:03:47 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:59.020 17:03:47 -- common/autotest_common.sh@899 -- # local i 00:24:59.020 17:03:47 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:59.020 17:03:47 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:59.020 17:03:47 -- common/autotest_common.sh@902 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:59.278 17:03:47 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:59.278 [ 00:24:59.278 { 00:24:59.279 "name": "BaseBdev1", 00:24:59.279 "aliases": [ 00:24:59.279 "a676794b-ade4-418d-a690-5285e7d4492a" 00:24:59.279 ], 00:24:59.279 "product_name": "Malloc disk", 00:24:59.279 "block_size": 512, 00:24:59.279 "num_blocks": 65536, 00:24:59.279 "uuid": "a676794b-ade4-418d-a690-5285e7d4492a", 00:24:59.279 "assigned_rate_limits": { 00:24:59.279 "rw_ios_per_sec": 0, 00:24:59.279 "rw_mbytes_per_sec": 0, 00:24:59.279 "r_mbytes_per_sec": 0, 00:24:59.279 "w_mbytes_per_sec": 0 00:24:59.279 }, 00:24:59.279 "claimed": true, 00:24:59.279 "claim_type": "exclusive_write", 00:24:59.279 "zoned": false, 00:24:59.279 "supported_io_types": { 00:24:59.279 "read": true, 00:24:59.279 "write": true, 00:24:59.279 "unmap": true, 00:24:59.279 "write_zeroes": true, 00:24:59.279 "flush": true, 00:24:59.279 "reset": true, 00:24:59.279 "compare": false, 00:24:59.279 "compare_and_write": false, 00:24:59.279 "abort": true, 00:24:59.279 "nvme_admin": false, 00:24:59.279 "nvme_io": false 00:24:59.279 }, 00:24:59.279 "memory_domains": [ 00:24:59.279 { 00:24:59.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:59.279 "dma_device_type": 2 00:24:59.279 } 00:24:59.279 ], 00:24:59.279 "driver_specific": {} 00:24:59.279 } 00:24:59.279 ] 00:24:59.279 17:03:48 -- common/autotest_common.sh@905 -- # return 0 00:24:59.279 17:03:48 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:59.279 17:03:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:59.279 17:03:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:59.279 17:03:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:59.279 17:03:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:59.279 17:03:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:59.279 17:03:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:59.279 17:03:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:59.279 17:03:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:59.279 17:03:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:59.279 17:03:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.279 17:03:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:59.537 17:03:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:59.537 "name": "Existed_Raid", 00:24:59.537 "uuid": "4990eb96-66f1-4997-b750-4554e4a58fb5", 00:24:59.537 "strip_size_kb": 64, 00:24:59.537 "state": "configuring", 00:24:59.537 "raid_level": "raid5f", 00:24:59.537 "superblock": true, 00:24:59.537 "num_base_bdevs": 4, 00:24:59.537 "num_base_bdevs_discovered": 1, 00:24:59.537 "num_base_bdevs_operational": 4, 00:24:59.537 "base_bdevs_list": [ 00:24:59.537 { 00:24:59.537 "name": "BaseBdev1", 00:24:59.537 "uuid": "a676794b-ade4-418d-a690-5285e7d4492a", 00:24:59.537 "is_configured": true, 00:24:59.537 "data_offset": 2048, 00:24:59.537 "data_size": 63488 00:24:59.537 }, 00:24:59.537 { 00:24:59.537 "name": "BaseBdev2", 00:24:59.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.537 "is_configured": false, 00:24:59.537 "data_offset": 0, 00:24:59.537 "data_size": 0 
00:24:59.537 }, 00:24:59.537 { 00:24:59.537 "name": "BaseBdev3", 00:24:59.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.537 "is_configured": false, 00:24:59.537 "data_offset": 0, 00:24:59.537 "data_size": 0 00:24:59.537 }, 00:24:59.537 { 00:24:59.537 "name": "BaseBdev4", 00:24:59.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.537 "is_configured": false, 00:24:59.537 "data_offset": 0, 00:24:59.537 "data_size": 0 00:24:59.537 } 00:24:59.537 ] 00:24:59.537 }' 00:24:59.537 17:03:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:59.537 17:03:48 -- common/autotest_common.sh@10 -- # set +x 00:25:00.110 17:03:48 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:00.399 [2024-11-05 17:03:49.115390] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:00.399 [2024-11-05 17:03:49.115587] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:25:00.399 17:03:49 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:25:00.399 17:03:49 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:00.657 17:03:49 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:00.915 BaseBdev1 00:25:00.915 17:03:49 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:25:00.915 17:03:49 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:00.915 17:03:49 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:00.915 17:03:49 -- common/autotest_common.sh@899 -- # local i 00:25:00.915 17:03:49 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:00.915 17:03:49 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:00.915 17:03:49 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:01.173 17:03:49 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:01.431 [ 00:25:01.431 { 00:25:01.431 "name": "BaseBdev1", 00:25:01.431 "aliases": [ 00:25:01.431 "105fd871-0c21-420d-9008-51af9d7b4af8" 00:25:01.431 ], 00:25:01.431 "product_name": "Malloc disk", 00:25:01.431 "block_size": 512, 00:25:01.431 "num_blocks": 65536, 00:25:01.431 "uuid": "105fd871-0c21-420d-9008-51af9d7b4af8", 00:25:01.431 "assigned_rate_limits": { 00:25:01.431 "rw_ios_per_sec": 0, 00:25:01.431 "rw_mbytes_per_sec": 0, 00:25:01.431 "r_mbytes_per_sec": 0, 00:25:01.431 "w_mbytes_per_sec": 0 00:25:01.431 }, 00:25:01.431 "claimed": false, 00:25:01.431 "zoned": false, 00:25:01.431 "supported_io_types": { 00:25:01.431 "read": true, 00:25:01.431 "write": true, 00:25:01.431 "unmap": true, 00:25:01.431 "write_zeroes": true, 00:25:01.431 "flush": true, 00:25:01.431 "reset": true, 00:25:01.431 "compare": false, 00:25:01.431 "compare_and_write": false, 00:25:01.431 "abort": true, 00:25:01.431 "nvme_admin": false, 00:25:01.431 "nvme_io": false 00:25:01.431 }, 00:25:01.431 "memory_domains": [ 00:25:01.431 { 00:25:01.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:01.431 "dma_device_type": 2 00:25:01.431 } 00:25:01.431 ], 00:25:01.431 "driver_specific": {} 00:25:01.431 } 00:25:01.431 ] 00:25:01.431 17:03:50 -- common/autotest_common.sh@905 -- # return 0 00:25:01.431 17:03:50 -- 
bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:01.689 [2024-11-05 17:03:50.404933] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:01.689 [2024-11-05 17:03:50.407136] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:01.689 [2024-11-05 17:03:50.407327] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:01.689 [2024-11-05 17:03:50.407442] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:01.689 [2024-11-05 17:03:50.407506] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:01.689 [2024-11-05 17:03:50.407599] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:01.689 [2024-11-05 17:03:50.407655] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:01.690 17:03:50 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:25:01.690 17:03:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:01.690 17:03:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:01.690 17:03:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:01.690 17:03:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:01.690 17:03:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:01.690 17:03:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:01.690 17:03:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:01.690 17:03:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:01.690 17:03:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:01.690 17:03:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:01.690 17:03:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:01.690 17:03:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.690 17:03:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:01.948 17:03:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:01.948 "name": "Existed_Raid", 00:25:01.948 "uuid": "e3805977-cf3b-44cb-b624-f40015b7a92f", 00:25:01.948 "strip_size_kb": 64, 00:25:01.948 "state": "configuring", 00:25:01.948 "raid_level": "raid5f", 00:25:01.948 "superblock": true, 00:25:01.948 "num_base_bdevs": 4, 00:25:01.948 "num_base_bdevs_discovered": 1, 00:25:01.948 "num_base_bdevs_operational": 4, 00:25:01.948 "base_bdevs_list": [ 00:25:01.948 { 00:25:01.948 "name": "BaseBdev1", 00:25:01.948 "uuid": "105fd871-0c21-420d-9008-51af9d7b4af8", 00:25:01.948 "is_configured": true, 00:25:01.948 "data_offset": 2048, 00:25:01.948 "data_size": 63488 00:25:01.948 }, 00:25:01.948 { 00:25:01.948 "name": "BaseBdev2", 00:25:01.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.948 "is_configured": false, 00:25:01.948 "data_offset": 0, 00:25:01.948 "data_size": 0 00:25:01.948 }, 00:25:01.948 { 00:25:01.948 "name": "BaseBdev3", 00:25:01.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.948 "is_configured": false, 00:25:01.948 "data_offset": 0, 00:25:01.948 "data_size": 0 00:25:01.948 }, 00:25:01.948 { 00:25:01.948 "name": "BaseBdev4", 00:25:01.948 "uuid": "00000000-0000-0000-0000-000000000000", 
00:25:01.948 "is_configured": false, 00:25:01.948 "data_offset": 0, 00:25:01.948 "data_size": 0 00:25:01.948 } 00:25:01.948 ] 00:25:01.948 }' 00:25:01.948 17:03:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:01.948 17:03:50 -- common/autotest_common.sh@10 -- # set +x 00:25:02.514 17:03:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:02.772 [2024-11-05 17:03:51.530406] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:02.772 BaseBdev2 00:25:02.772 17:03:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:25:02.772 17:03:51 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:02.772 17:03:51 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:02.772 17:03:51 -- common/autotest_common.sh@899 -- # local i 00:25:02.772 17:03:51 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:02.772 17:03:51 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:02.772 17:03:51 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:03.030 17:03:51 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:03.289 [ 00:25:03.289 { 00:25:03.289 "name": "BaseBdev2", 00:25:03.289 "aliases": [ 00:25:03.289 "80199f3f-3b47-4ba1-a965-f3d3940e4983" 00:25:03.289 ], 00:25:03.289 "product_name": "Malloc disk", 00:25:03.289 "block_size": 512, 00:25:03.289 "num_blocks": 65536, 00:25:03.289 "uuid": "80199f3f-3b47-4ba1-a965-f3d3940e4983", 00:25:03.289 "assigned_rate_limits": { 00:25:03.289 "rw_ios_per_sec": 0, 00:25:03.289 "rw_mbytes_per_sec": 0, 00:25:03.289 "r_mbytes_per_sec": 0, 00:25:03.289 "w_mbytes_per_sec": 0 00:25:03.289 }, 00:25:03.289 "claimed": true, 00:25:03.289 "claim_type": "exclusive_write", 00:25:03.289 "zoned": false, 00:25:03.289 "supported_io_types": { 00:25:03.289 "read": true, 00:25:03.289 "write": true, 00:25:03.289 "unmap": true, 00:25:03.289 "write_zeroes": true, 00:25:03.289 "flush": true, 00:25:03.289 "reset": true, 00:25:03.289 "compare": false, 00:25:03.289 "compare_and_write": false, 00:25:03.289 "abort": true, 00:25:03.289 "nvme_admin": false, 00:25:03.289 "nvme_io": false 00:25:03.289 }, 00:25:03.289 "memory_domains": [ 00:25:03.289 { 00:25:03.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.289 "dma_device_type": 2 00:25:03.289 } 00:25:03.289 ], 00:25:03.289 "driver_specific": {} 00:25:03.289 } 00:25:03.289 ] 00:25:03.289 17:03:52 -- common/autotest_common.sh@905 -- # return 0 00:25:03.289 17:03:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:03.289 17:03:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:03.289 17:03:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:03.289 17:03:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:03.289 17:03:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:03.289 17:03:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:03.289 17:03:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:03.289 17:03:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:03.289 17:03:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:03.289 17:03:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:03.289 17:03:52 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:25:03.289 17:03:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:03.289 17:03:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.289 17:03:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:03.547 17:03:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:03.547 "name": "Existed_Raid", 00:25:03.547 "uuid": "e3805977-cf3b-44cb-b624-f40015b7a92f", 00:25:03.547 "strip_size_kb": 64, 00:25:03.547 "state": "configuring", 00:25:03.547 "raid_level": "raid5f", 00:25:03.547 "superblock": true, 00:25:03.547 "num_base_bdevs": 4, 00:25:03.547 "num_base_bdevs_discovered": 2, 00:25:03.547 "num_base_bdevs_operational": 4, 00:25:03.547 "base_bdevs_list": [ 00:25:03.547 { 00:25:03.547 "name": "BaseBdev1", 00:25:03.547 "uuid": "105fd871-0c21-420d-9008-51af9d7b4af8", 00:25:03.547 "is_configured": true, 00:25:03.547 "data_offset": 2048, 00:25:03.547 "data_size": 63488 00:25:03.547 }, 00:25:03.547 { 00:25:03.547 "name": "BaseBdev2", 00:25:03.547 "uuid": "80199f3f-3b47-4ba1-a965-f3d3940e4983", 00:25:03.547 "is_configured": true, 00:25:03.547 "data_offset": 2048, 00:25:03.547 "data_size": 63488 00:25:03.547 }, 00:25:03.547 { 00:25:03.547 "name": "BaseBdev3", 00:25:03.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.547 "is_configured": false, 00:25:03.547 "data_offset": 0, 00:25:03.547 "data_size": 0 00:25:03.547 }, 00:25:03.547 { 00:25:03.547 "name": "BaseBdev4", 00:25:03.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.547 "is_configured": false, 00:25:03.547 "data_offset": 0, 00:25:03.547 "data_size": 0 00:25:03.547 } 00:25:03.547 ] 00:25:03.547 }' 00:25:03.547 17:03:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:03.547 17:03:52 -- common/autotest_common.sh@10 -- # set +x 00:25:04.113 17:03:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:04.370 [2024-11-05 17:03:53.131099] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:04.371 BaseBdev3 00:25:04.371 17:03:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:25:04.371 17:03:53 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:04.371 17:03:53 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:04.371 17:03:53 -- common/autotest_common.sh@899 -- # local i 00:25:04.371 17:03:53 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:04.371 17:03:53 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:04.371 17:03:53 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:04.628 17:03:53 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:04.886 [ 00:25:04.886 { 00:25:04.886 "name": "BaseBdev3", 00:25:04.886 "aliases": [ 00:25:04.886 "d39dcdb3-3b90-4895-b1ab-67e98152273f" 00:25:04.886 ], 00:25:04.886 "product_name": "Malloc disk", 00:25:04.886 "block_size": 512, 00:25:04.886 "num_blocks": 65536, 00:25:04.886 "uuid": "d39dcdb3-3b90-4895-b1ab-67e98152273f", 00:25:04.886 "assigned_rate_limits": { 00:25:04.886 "rw_ios_per_sec": 0, 00:25:04.886 "rw_mbytes_per_sec": 0, 00:25:04.886 "r_mbytes_per_sec": 0, 00:25:04.886 "w_mbytes_per_sec": 0 00:25:04.886 }, 00:25:04.886 "claimed": true, 00:25:04.886 "claim_type": "exclusive_write", 
00:25:04.886 "zoned": false, 00:25:04.886 "supported_io_types": { 00:25:04.886 "read": true, 00:25:04.886 "write": true, 00:25:04.886 "unmap": true, 00:25:04.886 "write_zeroes": true, 00:25:04.886 "flush": true, 00:25:04.886 "reset": true, 00:25:04.886 "compare": false, 00:25:04.886 "compare_and_write": false, 00:25:04.886 "abort": true, 00:25:04.886 "nvme_admin": false, 00:25:04.886 "nvme_io": false 00:25:04.886 }, 00:25:04.886 "memory_domains": [ 00:25:04.886 { 00:25:04.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.886 "dma_device_type": 2 00:25:04.886 } 00:25:04.886 ], 00:25:04.886 "driver_specific": {} 00:25:04.886 } 00:25:04.886 ] 00:25:04.886 17:03:53 -- common/autotest_common.sh@905 -- # return 0 00:25:04.886 17:03:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:04.886 17:03:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:04.886 17:03:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:04.886 17:03:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:04.886 17:03:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:04.886 17:03:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:04.886 17:03:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:04.886 17:03:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:04.886 17:03:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:04.886 17:03:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:04.886 17:03:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:04.886 17:03:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:04.886 17:03:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.886 17:03:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:05.144 17:03:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:05.144 "name": "Existed_Raid", 00:25:05.144 "uuid": "e3805977-cf3b-44cb-b624-f40015b7a92f", 00:25:05.144 "strip_size_kb": 64, 00:25:05.144 "state": "configuring", 00:25:05.144 "raid_level": "raid5f", 00:25:05.144 "superblock": true, 00:25:05.144 "num_base_bdevs": 4, 00:25:05.144 "num_base_bdevs_discovered": 3, 00:25:05.144 "num_base_bdevs_operational": 4, 00:25:05.144 "base_bdevs_list": [ 00:25:05.144 { 00:25:05.144 "name": "BaseBdev1", 00:25:05.144 "uuid": "105fd871-0c21-420d-9008-51af9d7b4af8", 00:25:05.144 "is_configured": true, 00:25:05.144 "data_offset": 2048, 00:25:05.144 "data_size": 63488 00:25:05.144 }, 00:25:05.144 { 00:25:05.144 "name": "BaseBdev2", 00:25:05.144 "uuid": "80199f3f-3b47-4ba1-a965-f3d3940e4983", 00:25:05.144 "is_configured": true, 00:25:05.144 "data_offset": 2048, 00:25:05.144 "data_size": 63488 00:25:05.144 }, 00:25:05.144 { 00:25:05.144 "name": "BaseBdev3", 00:25:05.144 "uuid": "d39dcdb3-3b90-4895-b1ab-67e98152273f", 00:25:05.144 "is_configured": true, 00:25:05.144 "data_offset": 2048, 00:25:05.144 "data_size": 63488 00:25:05.144 }, 00:25:05.144 { 00:25:05.144 "name": "BaseBdev4", 00:25:05.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.144 "is_configured": false, 00:25:05.144 "data_offset": 0, 00:25:05.144 "data_size": 0 00:25:05.144 } 00:25:05.144 ] 00:25:05.144 }' 00:25:05.144 17:03:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:05.144 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:25:05.709 17:03:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:05.967 [2024-11-05 17:03:54.725521] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:05.967 [2024-11-05 17:03:54.726050] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:25:05.967 [2024-11-05 17:03:54.726184] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:05.967 [2024-11-05 17:03:54.726353] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:25:05.967 BaseBdev4 00:25:05.967 [2024-11-05 17:03:54.732209] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:25:05.967 [2024-11-05 17:03:54.732378] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:25:05.967 [2024-11-05 17:03:54.732676] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:05.967 17:03:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:25:05.967 17:03:54 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:05.967 17:03:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:05.967 17:03:54 -- common/autotest_common.sh@899 -- # local i 00:25:05.967 17:03:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:05.967 17:03:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:05.967 17:03:54 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:06.225 17:03:54 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:06.225 [ 00:25:06.225 { 00:25:06.225 "name": "BaseBdev4", 00:25:06.225 "aliases": [ 00:25:06.225 "2774d4e5-2209-4fee-905b-9209e4cd4640" 00:25:06.225 ], 00:25:06.225 "product_name": "Malloc disk", 00:25:06.225 "block_size": 512, 00:25:06.225 "num_blocks": 65536, 00:25:06.225 "uuid": "2774d4e5-2209-4fee-905b-9209e4cd4640", 00:25:06.225 "assigned_rate_limits": { 00:25:06.225 "rw_ios_per_sec": 0, 00:25:06.225 "rw_mbytes_per_sec": 0, 00:25:06.225 "r_mbytes_per_sec": 0, 00:25:06.225 "w_mbytes_per_sec": 0 00:25:06.225 }, 00:25:06.225 "claimed": true, 00:25:06.225 "claim_type": "exclusive_write", 00:25:06.225 "zoned": false, 00:25:06.225 "supported_io_types": { 00:25:06.225 "read": true, 00:25:06.225 "write": true, 00:25:06.225 "unmap": true, 00:25:06.225 "write_zeroes": true, 00:25:06.225 "flush": true, 00:25:06.225 "reset": true, 00:25:06.225 "compare": false, 00:25:06.225 "compare_and_write": false, 00:25:06.225 "abort": true, 00:25:06.225 "nvme_admin": false, 00:25:06.225 "nvme_io": false 00:25:06.225 }, 00:25:06.225 "memory_domains": [ 00:25:06.225 { 00:25:06.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:06.225 "dma_device_type": 2 00:25:06.225 } 00:25:06.225 ], 00:25:06.225 "driver_specific": {} 00:25:06.225 } 00:25:06.225 ] 00:25:06.225 17:03:55 -- common/autotest_common.sh@905 -- # return 0 00:25:06.225 17:03:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:06.225 17:03:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:06.225 17:03:55 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:25:06.225 17:03:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:06.225 17:03:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:06.225 17:03:55 -- bdev/bdev_raid.sh@119 -- 
# local raid_level=raid5f 00:25:06.225 17:03:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:06.225 17:03:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:06.225 17:03:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:06.225 17:03:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:06.225 17:03:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:06.225 17:03:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:06.225 17:03:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.225 17:03:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:06.482 17:03:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:06.482 "name": "Existed_Raid", 00:25:06.482 "uuid": "e3805977-cf3b-44cb-b624-f40015b7a92f", 00:25:06.482 "strip_size_kb": 64, 00:25:06.482 "state": "online", 00:25:06.482 "raid_level": "raid5f", 00:25:06.482 "superblock": true, 00:25:06.482 "num_base_bdevs": 4, 00:25:06.482 "num_base_bdevs_discovered": 4, 00:25:06.482 "num_base_bdevs_operational": 4, 00:25:06.482 "base_bdevs_list": [ 00:25:06.482 { 00:25:06.482 "name": "BaseBdev1", 00:25:06.482 "uuid": "105fd871-0c21-420d-9008-51af9d7b4af8", 00:25:06.482 "is_configured": true, 00:25:06.482 "data_offset": 2048, 00:25:06.482 "data_size": 63488 00:25:06.482 }, 00:25:06.482 { 00:25:06.482 "name": "BaseBdev2", 00:25:06.482 "uuid": "80199f3f-3b47-4ba1-a965-f3d3940e4983", 00:25:06.482 "is_configured": true, 00:25:06.482 "data_offset": 2048, 00:25:06.482 "data_size": 63488 00:25:06.482 }, 00:25:06.482 { 00:25:06.482 "name": "BaseBdev3", 00:25:06.482 "uuid": "d39dcdb3-3b90-4895-b1ab-67e98152273f", 00:25:06.482 "is_configured": true, 00:25:06.482 "data_offset": 2048, 00:25:06.482 "data_size": 63488 00:25:06.482 }, 00:25:06.482 { 00:25:06.482 "name": "BaseBdev4", 00:25:06.482 "uuid": "2774d4e5-2209-4fee-905b-9209e4cd4640", 00:25:06.482 "is_configured": true, 00:25:06.482 "data_offset": 2048, 00:25:06.482 "data_size": 63488 00:25:06.482 } 00:25:06.482 ] 00:25:06.482 }' 00:25:06.482 17:03:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:06.482 17:03:55 -- common/autotest_common.sh@10 -- # set +x 00:25:07.047 17:03:55 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:07.304 [2024-11-05 17:03:56.022960] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:07.304 17:03:56 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:25:07.304 17:03:56 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:25:07.304 17:03:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:07.304 17:03:56 -- bdev/bdev_raid.sh@196 -- # return 0 00:25:07.304 17:03:56 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:25:07.304 17:03:56 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:07.304 17:03:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:07.304 17:03:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:07.304 17:03:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:07.304 17:03:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:07.304 17:03:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:07.304 17:03:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:07.304 17:03:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:07.304 17:03:56 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:07.304 17:03:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:07.304 17:03:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.304 17:03:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:07.563 17:03:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:07.563 "name": "Existed_Raid", 00:25:07.563 "uuid": "e3805977-cf3b-44cb-b624-f40015b7a92f", 00:25:07.563 "strip_size_kb": 64, 00:25:07.563 "state": "online", 00:25:07.563 "raid_level": "raid5f", 00:25:07.563 "superblock": true, 00:25:07.563 "num_base_bdevs": 4, 00:25:07.563 "num_base_bdevs_discovered": 3, 00:25:07.563 "num_base_bdevs_operational": 3, 00:25:07.563 "base_bdevs_list": [ 00:25:07.563 { 00:25:07.563 "name": null, 00:25:07.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.563 "is_configured": false, 00:25:07.563 "data_offset": 2048, 00:25:07.563 "data_size": 63488 00:25:07.563 }, 00:25:07.563 { 00:25:07.563 "name": "BaseBdev2", 00:25:07.563 "uuid": "80199f3f-3b47-4ba1-a965-f3d3940e4983", 00:25:07.563 "is_configured": true, 00:25:07.563 "data_offset": 2048, 00:25:07.563 "data_size": 63488 00:25:07.563 }, 00:25:07.563 { 00:25:07.563 "name": "BaseBdev3", 00:25:07.563 "uuid": "d39dcdb3-3b90-4895-b1ab-67e98152273f", 00:25:07.563 "is_configured": true, 00:25:07.563 "data_offset": 2048, 00:25:07.563 "data_size": 63488 00:25:07.563 }, 00:25:07.563 { 00:25:07.563 "name": "BaseBdev4", 00:25:07.563 "uuid": "2774d4e5-2209-4fee-905b-9209e4cd4640", 00:25:07.563 "is_configured": true, 00:25:07.563 "data_offset": 2048, 00:25:07.563 "data_size": 63488 00:25:07.563 } 00:25:07.563 ] 00:25:07.563 }' 00:25:07.563 17:03:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:07.563 17:03:56 -- common/autotest_common.sh@10 -- # set +x 00:25:08.128 17:03:56 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:25:08.129 17:03:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:08.129 17:03:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.129 17:03:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:08.386 17:03:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:08.386 17:03:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:08.386 17:03:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:08.645 [2024-11-05 17:03:57.355544] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:08.645 [2024-11-05 17:03:57.355734] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:08.645 [2024-11-05 17:03:57.355962] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:08.645 17:03:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:08.645 17:03:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:08.645 17:03:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.645 17:03:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:08.903 17:03:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:08.903 17:03:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:08.903 17:03:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:09.161 [2024-11-05 17:03:57.938128] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:09.161 17:03:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:09.161 17:03:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:09.161 17:03:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.161 17:03:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:09.419 17:03:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:09.419 17:03:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:09.419 17:03:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:09.677 [2024-11-05 17:03:58.382313] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:09.677 [2024-11-05 17:03:58.382534] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:25:09.677 17:03:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:09.677 17:03:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:09.677 17:03:58 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.677 17:03:58 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:25:09.935 17:03:58 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:25:09.935 17:03:58 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:25:09.935 17:03:58 -- bdev/bdev_raid.sh@287 -- # killprocess 129951 00:25:09.935 17:03:58 -- common/autotest_common.sh@936 -- # '[' -z 129951 ']' 00:25:09.935 17:03:58 -- common/autotest_common.sh@940 -- # kill -0 129951 00:25:09.935 17:03:58 -- common/autotest_common.sh@941 -- # uname 00:25:09.935 17:03:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:09.935 17:03:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129951 00:25:09.935 killing process with pid 129951 00:25:09.935 17:03:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:09.935 17:03:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:09.935 17:03:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 129951' 00:25:09.935 17:03:58 -- common/autotest_common.sh@955 -- # kill 129951 00:25:09.935 [2024-11-05 17:03:58.675919] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:09.935 17:03:58 -- common/autotest_common.sh@960 -- # wait 129951 00:25:09.936 [2024-11-05 17:03:58.676029] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:10.870 ************************************ 00:25:10.870 END TEST raid5f_state_function_test_sb 00:25:10.870 ************************************ 00:25:10.870 17:03:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:25:10.870 00:25:10.870 real 0m14.681s 00:25:10.870 user 0m26.209s 00:25:10.870 sys 0m1.602s 00:25:10.870 17:03:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:10.870 17:03:59 -- common/autotest_common.sh@10 -- # set +x 00:25:10.870 17:03:59 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:25:10.870 17:03:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:25:10.870 17:03:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:10.870 17:03:59 -- common/autotest_common.sh@10 -- # set +x 00:25:11.128 
************************************ 00:25:11.128 START TEST raid5f_superblock_test 00:25:11.128 ************************************ 00:25:11.128 17:03:59 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid5f 4 00:25:11.128 17:03:59 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:25:11.128 17:03:59 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:25:11.128 17:03:59 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:25:11.128 17:03:59 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:25:11.128 17:03:59 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:25:11.128 17:03:59 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:25:11.128 17:03:59 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:25:11.128 17:03:59 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:25:11.128 17:03:59 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:25:11.128 17:03:59 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:25:11.128 17:03:59 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:25:11.128 17:03:59 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:25:11.128 17:03:59 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:25:11.128 17:03:59 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:25:11.128 17:03:59 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:25:11.128 17:03:59 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:25:11.128 17:03:59 -- bdev/bdev_raid.sh@357 -- # raid_pid=130393 00:25:11.128 17:03:59 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:11.128 17:03:59 -- bdev/bdev_raid.sh@358 -- # waitforlisten 130393 /var/tmp/spdk-raid.sock 00:25:11.128 17:03:59 -- common/autotest_common.sh@829 -- # '[' -z 130393 ']' 00:25:11.128 17:03:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:11.128 17:03:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:11.128 17:03:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:11.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:11.128 17:03:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:11.129 17:03:59 -- common/autotest_common.sh@10 -- # set +x 00:25:11.129 [2024-11-05 17:03:59.855504] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:11.129 [2024-11-05 17:03:59.855945] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130393 ] 00:25:11.387 [2024-11-05 17:04:00.029044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.387 [2024-11-05 17:04:00.221351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.645 [2024-11-05 17:04:00.384706] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:11.902 17:04:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:11.903 17:04:00 -- common/autotest_common.sh@862 -- # return 0 00:25:11.903 17:04:00 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:25:11.903 17:04:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:11.903 17:04:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:25:11.903 17:04:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:25:11.903 17:04:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:11.903 17:04:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:11.903 17:04:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:11.903 17:04:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:11.903 17:04:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:25:12.160 malloc1 00:25:12.160 17:04:00 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:12.418 [2024-11-05 17:04:01.174135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:12.418 [2024-11-05 17:04:01.174361] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:12.418 [2024-11-05 17:04:01.174429] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:25:12.418 [2024-11-05 17:04:01.174592] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:12.418 [2024-11-05 17:04:01.176750] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:12.418 [2024-11-05 17:04:01.176917] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:12.418 pt1 00:25:12.418 17:04:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:12.418 17:04:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:12.418 17:04:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:25:12.418 17:04:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:25:12.418 17:04:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:12.418 17:04:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:12.418 17:04:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:12.418 17:04:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:12.418 17:04:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:25:12.676 malloc2 00:25:12.676 17:04:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:25:12.933 [2024-11-05 17:04:01.645419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:12.933 [2024-11-05 17:04:01.646790] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:12.933 [2024-11-05 17:04:01.646905] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:12.933 [2024-11-05 17:04:01.647066] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:12.933 [2024-11-05 17:04:01.649258] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:12.933 [2024-11-05 17:04:01.649466] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:12.933 pt2 00:25:12.933 17:04:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:12.933 17:04:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:12.933 17:04:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:25:12.933 17:04:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:25:12.933 17:04:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:12.933 17:04:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:12.933 17:04:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:12.933 17:04:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:12.933 17:04:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:25:13.191 malloc3 00:25:13.191 17:04:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:13.191 [2024-11-05 17:04:02.072615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:13.191 [2024-11-05 17:04:02.072830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:13.191 [2024-11-05 17:04:02.072911] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:25:13.191 [2024-11-05 17:04:02.073143] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:13.191 [2024-11-05 17:04:02.075410] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:13.191 [2024-11-05 17:04:02.075594] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:13.191 pt3 00:25:13.191 17:04:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:13.191 17:04:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:13.449 17:04:02 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:25:13.449 17:04:02 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:25:13.449 17:04:02 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:25:13.449 17:04:02 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:13.449 17:04:02 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:13.449 17:04:02 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:13.449 17:04:02 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:25:13.449 malloc4 00:25:13.449 17:04:02 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:25:13.707 [2024-11-05 17:04:02.497791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:13.707 [2024-11-05 17:04:02.498011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:13.707 [2024-11-05 17:04:02.498083] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:25:13.707 [2024-11-05 17:04:02.498343] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:13.707 [2024-11-05 17:04:02.500564] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:13.707 [2024-11-05 17:04:02.500747] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:13.707 pt4 00:25:13.707 17:04:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:13.707 17:04:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:13.707 17:04:02 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:25:13.965 [2024-11-05 17:04:02.693903] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:13.965 [2024-11-05 17:04:02.695803] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:13.965 [2024-11-05 17:04:02.695994] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:13.965 [2024-11-05 17:04:02.696113] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:13.965 [2024-11-05 17:04:02.696376] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:25:13.965 [2024-11-05 17:04:02.696493] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:13.965 [2024-11-05 17:04:02.696620] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:25:13.965 [2024-11-05 17:04:02.702154] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:25:13.965 [2024-11-05 17:04:02.702275] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:25:13.965 [2024-11-05 17:04:02.702570] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:13.965 17:04:02 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:13.965 17:04:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:13.965 17:04:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:13.965 17:04:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:13.965 17:04:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:13.965 17:04:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:13.965 17:04:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:13.965 17:04:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:13.965 17:04:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:13.965 17:04:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:13.965 17:04:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.965 17:04:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:14.223 17:04:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:14.223 "name": "raid_bdev1", 00:25:14.223 "uuid": 
"915ad24d-eb96-4c53-84ff-99579bafd0c3", 00:25:14.223 "strip_size_kb": 64, 00:25:14.223 "state": "online", 00:25:14.223 "raid_level": "raid5f", 00:25:14.223 "superblock": true, 00:25:14.223 "num_base_bdevs": 4, 00:25:14.223 "num_base_bdevs_discovered": 4, 00:25:14.223 "num_base_bdevs_operational": 4, 00:25:14.223 "base_bdevs_list": [ 00:25:14.223 { 00:25:14.223 "name": "pt1", 00:25:14.223 "uuid": "d6a58213-2516-5b34-817d-fb7f13944714", 00:25:14.223 "is_configured": true, 00:25:14.223 "data_offset": 2048, 00:25:14.223 "data_size": 63488 00:25:14.223 }, 00:25:14.223 { 00:25:14.223 "name": "pt2", 00:25:14.223 "uuid": "cacaacae-7bed-54bd-8765-8786a58ed2f1", 00:25:14.223 "is_configured": true, 00:25:14.223 "data_offset": 2048, 00:25:14.223 "data_size": 63488 00:25:14.223 }, 00:25:14.223 { 00:25:14.223 "name": "pt3", 00:25:14.223 "uuid": "0f7ba813-89f4-58e5-af8a-092c9f82551d", 00:25:14.223 "is_configured": true, 00:25:14.223 "data_offset": 2048, 00:25:14.223 "data_size": 63488 00:25:14.223 }, 00:25:14.223 { 00:25:14.224 "name": "pt4", 00:25:14.224 "uuid": "dc09cd11-9505-5ddb-9b6a-0da62ca0e2b6", 00:25:14.224 "is_configured": true, 00:25:14.224 "data_offset": 2048, 00:25:14.224 "data_size": 63488 00:25:14.224 } 00:25:14.224 ] 00:25:14.224 }' 00:25:14.224 17:04:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:14.224 17:04:02 -- common/autotest_common.sh@10 -- # set +x 00:25:14.790 17:04:03 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:14.790 17:04:03 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:25:15.047 [2024-11-05 17:04:03.717056] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:15.047 17:04:03 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=915ad24d-eb96-4c53-84ff-99579bafd0c3 00:25:15.047 17:04:03 -- bdev/bdev_raid.sh@380 -- # '[' -z 915ad24d-eb96-4c53-84ff-99579bafd0c3 ']' 00:25:15.047 17:04:03 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:15.047 [2024-11-05 17:04:03.904957] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:15.047 [2024-11-05 17:04:03.905089] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:15.047 [2024-11-05 17:04:03.905251] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:15.047 [2024-11-05 17:04:03.905442] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:15.047 [2024-11-05 17:04:03.905556] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:25:15.047 17:04:03 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:25:15.048 17:04:03 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.306 17:04:04 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:25:15.306 17:04:04 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:25:15.306 17:04:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:15.306 17:04:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:15.564 17:04:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:15.564 17:04:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:25:15.821 17:04:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:15.821 17:04:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:16.080 17:04:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:16.080 17:04:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:16.337 17:04:04 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:16.337 17:04:04 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:16.337 17:04:05 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:25:16.337 17:04:05 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:16.337 17:04:05 -- common/autotest_common.sh@650 -- # local es=0 00:25:16.337 17:04:05 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:16.337 17:04:05 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:16.337 17:04:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:16.337 17:04:05 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:16.337 17:04:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:16.337 17:04:05 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:16.337 17:04:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:16.337 17:04:05 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:16.337 17:04:05 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:16.337 17:04:05 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:16.601 [2024-11-05 17:04:05.409181] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:16.601 [2024-11-05 17:04:05.411103] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:16.601 [2024-11-05 17:04:05.411299] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:16.601 [2024-11-05 17:04:05.411451] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:25:16.601 [2024-11-05 17:04:05.411596] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:25:16.601 [2024-11-05 17:04:05.411756] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:25:16.601 [2024-11-05 17:04:05.411886] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:25:16.601 [2024-11-05 17:04:05.412036] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:25:16.601 [2024-11-05 17:04:05.412152] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:16.601 [2024-11-05 17:04:05.412241] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:25:16.601 request: 00:25:16.601 { 00:25:16.601 "name": "raid_bdev1", 00:25:16.601 "raid_level": "raid5f", 00:25:16.601 "base_bdevs": [ 00:25:16.601 "malloc1", 00:25:16.601 "malloc2", 00:25:16.601 "malloc3", 00:25:16.601 "malloc4" 00:25:16.601 ], 00:25:16.601 "superblock": false, 00:25:16.601 "strip_size_kb": 64, 00:25:16.601 "method": "bdev_raid_create", 00:25:16.601 "req_id": 1 00:25:16.601 } 00:25:16.601 Got JSON-RPC error response 00:25:16.601 response: 00:25:16.601 { 00:25:16.602 "code": -17, 00:25:16.602 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:16.602 } 00:25:16.602 17:04:05 -- common/autotest_common.sh@653 -- # es=1 00:25:16.602 17:04:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:16.602 17:04:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:16.602 17:04:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:16.602 17:04:05 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:25:16.602 17:04:05 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.871 17:04:05 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:25:16.871 17:04:05 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:25:16.871 17:04:05 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:17.129 [2024-11-05 17:04:05.853233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:17.129 [2024-11-05 17:04:05.853433] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:17.129 [2024-11-05 17:04:05.853503] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:17.129 [2024-11-05 17:04:05.853615] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:17.129 [2024-11-05 17:04:05.855753] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:17.129 [2024-11-05 17:04:05.855930] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:17.129 [2024-11-05 17:04:05.856126] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:25:17.129 [2024-11-05 17:04:05.856294] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:17.129 pt1 00:25:17.129 17:04:05 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:25:17.129 17:04:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:17.129 17:04:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:17.129 17:04:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:17.129 17:04:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:17.129 17:04:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:17.129 17:04:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:17.129 17:04:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:17.129 17:04:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:17.129 17:04:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:17.129 17:04:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.129 17:04:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.387 17:04:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:17.387 "name": "raid_bdev1", 00:25:17.387 "uuid": "915ad24d-eb96-4c53-84ff-99579bafd0c3", 00:25:17.387 "strip_size_kb": 64, 00:25:17.387 "state": "configuring", 00:25:17.387 "raid_level": "raid5f", 00:25:17.387 "superblock": true, 00:25:17.387 "num_base_bdevs": 4, 00:25:17.387 "num_base_bdevs_discovered": 1, 00:25:17.387 "num_base_bdevs_operational": 4, 00:25:17.387 "base_bdevs_list": [ 00:25:17.387 { 00:25:17.387 "name": "pt1", 00:25:17.387 "uuid": "d6a58213-2516-5b34-817d-fb7f13944714", 00:25:17.387 "is_configured": true, 00:25:17.387 "data_offset": 2048, 00:25:17.387 "data_size": 63488 00:25:17.387 }, 00:25:17.387 { 00:25:17.387 "name": null, 00:25:17.387 "uuid": "cacaacae-7bed-54bd-8765-8786a58ed2f1", 00:25:17.387 "is_configured": false, 00:25:17.387 "data_offset": 2048, 00:25:17.387 "data_size": 63488 00:25:17.387 }, 00:25:17.387 { 00:25:17.387 "name": null, 00:25:17.387 "uuid": "0f7ba813-89f4-58e5-af8a-092c9f82551d", 00:25:17.387 "is_configured": false, 00:25:17.387 "data_offset": 2048, 00:25:17.387 "data_size": 63488 00:25:17.387 }, 00:25:17.387 { 00:25:17.387 "name": null, 00:25:17.387 "uuid": "dc09cd11-9505-5ddb-9b6a-0da62ca0e2b6", 00:25:17.387 "is_configured": false, 00:25:17.387 "data_offset": 2048, 00:25:17.387 "data_size": 63488 00:25:17.387 } 00:25:17.387 ] 00:25:17.387 }' 00:25:17.387 17:04:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:17.387 17:04:06 -- common/autotest_common.sh@10 -- # set +x 00:25:17.953 17:04:06 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:25:17.953 17:04:06 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:18.212 [2024-11-05 17:04:06.933449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:18.212 [2024-11-05 17:04:06.933626] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:18.212 [2024-11-05 17:04:06.933715] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:25:18.212 [2024-11-05 17:04:06.934084] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:18.212 [2024-11-05 17:04:06.934725] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:18.212 [2024-11-05 17:04:06.934929] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:18.212 [2024-11-05 17:04:06.935118] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:18.212 [2024-11-05 17:04:06.935229] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:18.212 pt2 00:25:18.212 17:04:06 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:18.470 [2024-11-05 17:04:07.189503] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:18.470 17:04:07 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:25:18.470 17:04:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:18.470 17:04:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:18.470 17:04:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:18.470 17:04:07 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:18.470 17:04:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:18.470 17:04:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:18.470 17:04:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:18.470 17:04:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:18.470 17:04:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:18.470 17:04:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.470 17:04:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.728 17:04:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:18.728 "name": "raid_bdev1", 00:25:18.728 "uuid": "915ad24d-eb96-4c53-84ff-99579bafd0c3", 00:25:18.728 "strip_size_kb": 64, 00:25:18.728 "state": "configuring", 00:25:18.728 "raid_level": "raid5f", 00:25:18.728 "superblock": true, 00:25:18.728 "num_base_bdevs": 4, 00:25:18.728 "num_base_bdevs_discovered": 1, 00:25:18.728 "num_base_bdevs_operational": 4, 00:25:18.728 "base_bdevs_list": [ 00:25:18.728 { 00:25:18.728 "name": "pt1", 00:25:18.728 "uuid": "d6a58213-2516-5b34-817d-fb7f13944714", 00:25:18.728 "is_configured": true, 00:25:18.728 "data_offset": 2048, 00:25:18.728 "data_size": 63488 00:25:18.728 }, 00:25:18.728 { 00:25:18.728 "name": null, 00:25:18.728 "uuid": "cacaacae-7bed-54bd-8765-8786a58ed2f1", 00:25:18.728 "is_configured": false, 00:25:18.728 "data_offset": 2048, 00:25:18.728 "data_size": 63488 00:25:18.728 }, 00:25:18.728 { 00:25:18.728 "name": null, 00:25:18.728 "uuid": "0f7ba813-89f4-58e5-af8a-092c9f82551d", 00:25:18.728 "is_configured": false, 00:25:18.728 "data_offset": 2048, 00:25:18.728 "data_size": 63488 00:25:18.728 }, 00:25:18.728 { 00:25:18.728 "name": null, 00:25:18.728 "uuid": "dc09cd11-9505-5ddb-9b6a-0da62ca0e2b6", 00:25:18.728 "is_configured": false, 00:25:18.728 "data_offset": 2048, 00:25:18.728 "data_size": 63488 00:25:18.728 } 00:25:18.728 ] 00:25:18.728 }' 00:25:18.728 17:04:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:18.728 17:04:07 -- common/autotest_common.sh@10 -- # set +x 00:25:19.294 17:04:07 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:25:19.294 17:04:07 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:19.294 17:04:07 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:19.552 [2024-11-05 17:04:08.229716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:19.552 [2024-11-05 17:04:08.229890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:19.552 [2024-11-05 17:04:08.229959] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:19.552 [2024-11-05 17:04:08.230235] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:19.552 [2024-11-05 17:04:08.230649] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:19.552 [2024-11-05 17:04:08.230814] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:19.552 [2024-11-05 17:04:08.231020] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:19.552 [2024-11-05 17:04:08.231156] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:19.552 pt2 00:25:19.552 17:04:08 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:19.552 17:04:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:19.552 17:04:08 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:19.552 [2024-11-05 17:04:08.417737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:19.552 [2024-11-05 17:04:08.417915] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:19.552 [2024-11-05 17:04:08.417975] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:19.552 [2024-11-05 17:04:08.418084] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:19.552 [2024-11-05 17:04:08.418574] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:19.552 [2024-11-05 17:04:08.418737] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:19.552 [2024-11-05 17:04:08.418917] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:19.552 [2024-11-05 17:04:08.419033] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:19.552 pt3 00:25:19.552 17:04:08 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:19.552 17:04:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:19.552 17:04:08 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:19.812 [2024-11-05 17:04:08.665795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:19.812 [2024-11-05 17:04:08.665991] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:19.812 [2024-11-05 17:04:08.666055] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:19.812 [2024-11-05 17:04:08.666302] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:19.812 [2024-11-05 17:04:08.666703] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:19.812 [2024-11-05 17:04:08.666920] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:19.812 [2024-11-05 17:04:08.667123] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:25:19.812 [2024-11-05 17:04:08.667249] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:19.812 [2024-11-05 17:04:08.667499] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:25:19.812 [2024-11-05 17:04:08.667612] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:19.812 [2024-11-05 17:04:08.667734] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:19.812 [2024-11-05 17:04:08.672915] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:25:19.812 [2024-11-05 17:04:08.673060] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:25:19.812 [2024-11-05 17:04:08.673347] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:19.812 pt4 00:25:19.812 17:04:08 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:19.812 17:04:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:25:19.812 17:04:08 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:19.812 17:04:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:19.812 17:04:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:19.812 17:04:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:19.812 17:04:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:19.812 17:04:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:19.812 17:04:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:19.812 17:04:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:19.812 17:04:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:19.812 17:04:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:19.812 17:04:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.812 17:04:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.070 17:04:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:20.070 "name": "raid_bdev1", 00:25:20.070 "uuid": "915ad24d-eb96-4c53-84ff-99579bafd0c3", 00:25:20.070 "strip_size_kb": 64, 00:25:20.070 "state": "online", 00:25:20.070 "raid_level": "raid5f", 00:25:20.070 "superblock": true, 00:25:20.070 "num_base_bdevs": 4, 00:25:20.070 "num_base_bdevs_discovered": 4, 00:25:20.070 "num_base_bdevs_operational": 4, 00:25:20.070 "base_bdevs_list": [ 00:25:20.070 { 00:25:20.070 "name": "pt1", 00:25:20.070 "uuid": "d6a58213-2516-5b34-817d-fb7f13944714", 00:25:20.070 "is_configured": true, 00:25:20.070 "data_offset": 2048, 00:25:20.070 "data_size": 63488 00:25:20.070 }, 00:25:20.070 { 00:25:20.070 "name": "pt2", 00:25:20.070 "uuid": "cacaacae-7bed-54bd-8765-8786a58ed2f1", 00:25:20.070 "is_configured": true, 00:25:20.070 "data_offset": 2048, 00:25:20.070 "data_size": 63488 00:25:20.070 }, 00:25:20.070 { 00:25:20.070 "name": "pt3", 00:25:20.070 "uuid": "0f7ba813-89f4-58e5-af8a-092c9f82551d", 00:25:20.070 "is_configured": true, 00:25:20.070 "data_offset": 2048, 00:25:20.070 "data_size": 63488 00:25:20.070 }, 00:25:20.070 { 00:25:20.070 "name": "pt4", 00:25:20.070 "uuid": "dc09cd11-9505-5ddb-9b6a-0da62ca0e2b6", 00:25:20.070 "is_configured": true, 00:25:20.070 "data_offset": 2048, 00:25:20.070 "data_size": 63488 00:25:20.070 } 00:25:20.070 ] 00:25:20.070 }' 00:25:20.070 17:04:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:20.070 17:04:08 -- common/autotest_common.sh@10 -- # set +x 00:25:20.636 17:04:09 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:20.636 17:04:09 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:25:20.893 [2024-11-05 17:04:09.670528] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:20.893 17:04:09 -- bdev/bdev_raid.sh@430 -- # '[' 915ad24d-eb96-4c53-84ff-99579bafd0c3 '!=' 915ad24d-eb96-4c53-84ff-99579bafd0c3 ']' 00:25:20.893 17:04:09 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:25:20.893 17:04:09 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:20.893 17:04:09 -- bdev/bdev_raid.sh@196 -- # return 0 00:25:20.893 17:04:09 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:21.151 [2024-11-05 17:04:09.902487] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:21.151 17:04:09 -- bdev/bdev_raid.sh@439 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:21.151 17:04:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:21.151 17:04:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:21.151 17:04:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:21.151 17:04:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:21.151 17:04:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:21.151 17:04:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:21.151 17:04:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:21.151 17:04:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:21.151 17:04:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:21.151 17:04:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.151 17:04:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.409 17:04:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:21.409 "name": "raid_bdev1", 00:25:21.409 "uuid": "915ad24d-eb96-4c53-84ff-99579bafd0c3", 00:25:21.409 "strip_size_kb": 64, 00:25:21.409 "state": "online", 00:25:21.409 "raid_level": "raid5f", 00:25:21.409 "superblock": true, 00:25:21.409 "num_base_bdevs": 4, 00:25:21.409 "num_base_bdevs_discovered": 3, 00:25:21.409 "num_base_bdevs_operational": 3, 00:25:21.409 "base_bdevs_list": [ 00:25:21.409 { 00:25:21.409 "name": null, 00:25:21.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.409 "is_configured": false, 00:25:21.409 "data_offset": 2048, 00:25:21.409 "data_size": 63488 00:25:21.409 }, 00:25:21.409 { 00:25:21.409 "name": "pt2", 00:25:21.409 "uuid": "cacaacae-7bed-54bd-8765-8786a58ed2f1", 00:25:21.409 "is_configured": true, 00:25:21.409 "data_offset": 2048, 00:25:21.409 "data_size": 63488 00:25:21.409 }, 00:25:21.409 { 00:25:21.409 "name": "pt3", 00:25:21.409 "uuid": "0f7ba813-89f4-58e5-af8a-092c9f82551d", 00:25:21.409 "is_configured": true, 00:25:21.409 "data_offset": 2048, 00:25:21.409 "data_size": 63488 00:25:21.409 }, 00:25:21.409 { 00:25:21.409 "name": "pt4", 00:25:21.409 "uuid": "dc09cd11-9505-5ddb-9b6a-0da62ca0e2b6", 00:25:21.409 "is_configured": true, 00:25:21.409 "data_offset": 2048, 00:25:21.409 "data_size": 63488 00:25:21.409 } 00:25:21.409 ] 00:25:21.409 }' 00:25:21.409 17:04:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:21.409 17:04:10 -- common/autotest_common.sh@10 -- # set +x 00:25:21.974 17:04:10 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:22.232 [2024-11-05 17:04:10.998671] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:22.232 [2024-11-05 17:04:10.999095] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:22.232 [2024-11-05 17:04:10.999261] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:22.232 [2024-11-05 17:04:10.999470] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:22.232 [2024-11-05 17:04:10.999579] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:25:22.232 17:04:11 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.232 17:04:11 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:25:22.490 
17:04:11 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:25:22.490 17:04:11 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:25:22.490 17:04:11 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:25:22.490 17:04:11 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:22.490 17:04:11 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:22.749 17:04:11 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:22.749 17:04:11 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:22.749 17:04:11 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:22.749 17:04:11 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:22.749 17:04:11 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:22.749 17:04:11 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:23.007 17:04:11 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:23.007 17:04:11 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:23.007 17:04:11 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:25:23.007 17:04:11 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:23.007 17:04:11 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:23.265 [2024-11-05 17:04:11.974711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:23.265 [2024-11-05 17:04:11.974904] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:23.265 [2024-11-05 17:04:11.974974] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:23.265 [2024-11-05 17:04:11.975211] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:23.265 [2024-11-05 17:04:11.977316] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:23.265 [2024-11-05 17:04:11.977526] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:23.265 [2024-11-05 17:04:11.977722] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:23.265 [2024-11-05 17:04:11.977873] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:23.265 pt2 00:25:23.265 17:04:11 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:23.265 17:04:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:23.265 17:04:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:23.265 17:04:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:23.265 17:04:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:23.265 17:04:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:23.265 17:04:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:23.265 17:04:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:23.265 17:04:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:23.265 17:04:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:23.265 17:04:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.265 17:04:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.523 17:04:12 -- bdev/bdev_raid.sh@127 
-- # raid_bdev_info='{ 00:25:23.523 "name": "raid_bdev1", 00:25:23.523 "uuid": "915ad24d-eb96-4c53-84ff-99579bafd0c3", 00:25:23.523 "strip_size_kb": 64, 00:25:23.523 "state": "configuring", 00:25:23.523 "raid_level": "raid5f", 00:25:23.523 "superblock": true, 00:25:23.523 "num_base_bdevs": 4, 00:25:23.523 "num_base_bdevs_discovered": 1, 00:25:23.523 "num_base_bdevs_operational": 3, 00:25:23.523 "base_bdevs_list": [ 00:25:23.523 { 00:25:23.523 "name": null, 00:25:23.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.523 "is_configured": false, 00:25:23.523 "data_offset": 2048, 00:25:23.523 "data_size": 63488 00:25:23.523 }, 00:25:23.523 { 00:25:23.523 "name": "pt2", 00:25:23.523 "uuid": "cacaacae-7bed-54bd-8765-8786a58ed2f1", 00:25:23.523 "is_configured": true, 00:25:23.523 "data_offset": 2048, 00:25:23.523 "data_size": 63488 00:25:23.523 }, 00:25:23.523 { 00:25:23.523 "name": null, 00:25:23.523 "uuid": "0f7ba813-89f4-58e5-af8a-092c9f82551d", 00:25:23.523 "is_configured": false, 00:25:23.523 "data_offset": 2048, 00:25:23.523 "data_size": 63488 00:25:23.523 }, 00:25:23.523 { 00:25:23.523 "name": null, 00:25:23.523 "uuid": "dc09cd11-9505-5ddb-9b6a-0da62ca0e2b6", 00:25:23.523 "is_configured": false, 00:25:23.523 "data_offset": 2048, 00:25:23.523 "data_size": 63488 00:25:23.523 } 00:25:23.523 ] 00:25:23.523 }' 00:25:23.523 17:04:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:23.523 17:04:12 -- common/autotest_common.sh@10 -- # set +x 00:25:24.089 17:04:12 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:25:24.089 17:04:12 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:24.089 17:04:12 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:24.347 [2024-11-05 17:04:13.020422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:24.347 [2024-11-05 17:04:13.020666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:24.347 [2024-11-05 17:04:13.020746] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:25:24.347 [2024-11-05 17:04:13.020937] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:24.347 [2024-11-05 17:04:13.021437] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:24.347 [2024-11-05 17:04:13.021646] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:24.347 [2024-11-05 17:04:13.021885] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:24.347 [2024-11-05 17:04:13.022019] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:24.347 pt3 00:25:24.347 17:04:13 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:24.347 17:04:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:24.347 17:04:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:24.347 17:04:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:24.347 17:04:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:24.347 17:04:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:24.347 17:04:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:24.347 17:04:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:24.347 17:04:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:25:24.347 17:04:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:24.347 17:04:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.347 17:04:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.605 17:04:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:24.605 "name": "raid_bdev1", 00:25:24.605 "uuid": "915ad24d-eb96-4c53-84ff-99579bafd0c3", 00:25:24.605 "strip_size_kb": 64, 00:25:24.605 "state": "configuring", 00:25:24.605 "raid_level": "raid5f", 00:25:24.605 "superblock": true, 00:25:24.605 "num_base_bdevs": 4, 00:25:24.605 "num_base_bdevs_discovered": 2, 00:25:24.605 "num_base_bdevs_operational": 3, 00:25:24.605 "base_bdevs_list": [ 00:25:24.605 { 00:25:24.605 "name": null, 00:25:24.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.605 "is_configured": false, 00:25:24.605 "data_offset": 2048, 00:25:24.605 "data_size": 63488 00:25:24.605 }, 00:25:24.605 { 00:25:24.605 "name": "pt2", 00:25:24.605 "uuid": "cacaacae-7bed-54bd-8765-8786a58ed2f1", 00:25:24.605 "is_configured": true, 00:25:24.605 "data_offset": 2048, 00:25:24.605 "data_size": 63488 00:25:24.605 }, 00:25:24.605 { 00:25:24.605 "name": "pt3", 00:25:24.605 "uuid": "0f7ba813-89f4-58e5-af8a-092c9f82551d", 00:25:24.605 "is_configured": true, 00:25:24.605 "data_offset": 2048, 00:25:24.605 "data_size": 63488 00:25:24.605 }, 00:25:24.605 { 00:25:24.605 "name": null, 00:25:24.605 "uuid": "dc09cd11-9505-5ddb-9b6a-0da62ca0e2b6", 00:25:24.605 "is_configured": false, 00:25:24.605 "data_offset": 2048, 00:25:24.605 "data_size": 63488 00:25:24.605 } 00:25:24.605 ] 00:25:24.605 }' 00:25:24.605 17:04:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:24.605 17:04:13 -- common/autotest_common.sh@10 -- # set +x 00:25:25.171 17:04:13 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:25:25.171 17:04:13 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:25.171 17:04:13 -- bdev/bdev_raid.sh@462 -- # i=3 00:25:25.171 17:04:13 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:25.171 [2024-11-05 17:04:13.988597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:25.171 [2024-11-05 17:04:13.988786] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:25.171 [2024-11-05 17:04:13.988868] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:25:25.171 [2024-11-05 17:04:13.989118] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:25.171 [2024-11-05 17:04:13.989599] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:25.171 [2024-11-05 17:04:13.989762] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:25.171 [2024-11-05 17:04:13.989954] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:25:25.171 [2024-11-05 17:04:13.990078] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:25.171 [2024-11-05 17:04:13.990242] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ba80 00:25:25.171 [2024-11-05 17:04:13.990343] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:25.171 [2024-11-05 17:04:13.990485] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000062f0 00:25:25.171 [2024-11-05 17:04:13.995898] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ba80 00:25:25.171 [2024-11-05 17:04:13.996042] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ba80 00:25:25.171 [2024-11-05 17:04:13.996419] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:25.171 pt4 00:25:25.171 17:04:14 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:25.171 17:04:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:25.171 17:04:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:25.171 17:04:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:25.171 17:04:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:25.171 17:04:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:25.171 17:04:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:25.171 17:04:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:25.171 17:04:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:25.171 17:04:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:25.171 17:04:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.171 17:04:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.429 17:04:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:25.429 "name": "raid_bdev1", 00:25:25.429 "uuid": "915ad24d-eb96-4c53-84ff-99579bafd0c3", 00:25:25.429 "strip_size_kb": 64, 00:25:25.429 "state": "online", 00:25:25.429 "raid_level": "raid5f", 00:25:25.429 "superblock": true, 00:25:25.429 "num_base_bdevs": 4, 00:25:25.429 "num_base_bdevs_discovered": 3, 00:25:25.429 "num_base_bdevs_operational": 3, 00:25:25.429 "base_bdevs_list": [ 00:25:25.429 { 00:25:25.429 "name": null, 00:25:25.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.429 "is_configured": false, 00:25:25.429 "data_offset": 2048, 00:25:25.429 "data_size": 63488 00:25:25.429 }, 00:25:25.429 { 00:25:25.429 "name": "pt2", 00:25:25.429 "uuid": "cacaacae-7bed-54bd-8765-8786a58ed2f1", 00:25:25.429 "is_configured": true, 00:25:25.429 "data_offset": 2048, 00:25:25.429 "data_size": 63488 00:25:25.429 }, 00:25:25.429 { 00:25:25.429 "name": "pt3", 00:25:25.429 "uuid": "0f7ba813-89f4-58e5-af8a-092c9f82551d", 00:25:25.429 "is_configured": true, 00:25:25.429 "data_offset": 2048, 00:25:25.429 "data_size": 63488 00:25:25.429 }, 00:25:25.429 { 00:25:25.429 "name": "pt4", 00:25:25.429 "uuid": "dc09cd11-9505-5ddb-9b6a-0da62ca0e2b6", 00:25:25.429 "is_configured": true, 00:25:25.429 "data_offset": 2048, 00:25:25.429 "data_size": 63488 00:25:25.429 } 00:25:25.429 ] 00:25:25.429 }' 00:25:25.429 17:04:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:25.429 17:04:14 -- common/autotest_common.sh@10 -- # set +x 00:25:26.364 17:04:14 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:25:26.364 17:04:14 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:26.364 [2024-11-05 17:04:15.151268] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:26.364 [2024-11-05 17:04:15.151682] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:26.364 [2024-11-05 17:04:15.151896] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:26.364 [2024-11-05 17:04:15.152116] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:26.364 [2024-11-05 17:04:15.152237] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state offline 00:25:26.364 17:04:15 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.364 17:04:15 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:25:26.621 17:04:15 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:25:26.621 17:04:15 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:25:26.621 17:04:15 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:26.878 [2024-11-05 17:04:15.551468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:26.879 [2024-11-05 17:04:15.551673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.879 [2024-11-05 17:04:15.551752] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:25:26.879 [2024-11-05 17:04:15.551982] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.879 [2024-11-05 17:04:15.554162] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.879 [2024-11-05 17:04:15.554368] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:26.879 [2024-11-05 17:04:15.554577] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:25:26.879 [2024-11-05 17:04:15.554729] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:26.879 pt1 00:25:26.879 17:04:15 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:25:26.879 17:04:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:26.879 17:04:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:26.879 17:04:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:26.879 17:04:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:26.879 17:04:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:26.879 17:04:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:26.879 17:04:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:26.879 17:04:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:26.879 17:04:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:26.879 17:04:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.879 17:04:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.136 17:04:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:27.136 "name": "raid_bdev1", 00:25:27.136 "uuid": "915ad24d-eb96-4c53-84ff-99579bafd0c3", 00:25:27.136 "strip_size_kb": 64, 00:25:27.136 "state": "configuring", 00:25:27.136 "raid_level": "raid5f", 00:25:27.136 "superblock": true, 00:25:27.136 "num_base_bdevs": 4, 00:25:27.136 "num_base_bdevs_discovered": 1, 00:25:27.136 "num_base_bdevs_operational": 4, 00:25:27.136 "base_bdevs_list": [ 00:25:27.136 { 00:25:27.136 "name": "pt1", 00:25:27.136 "uuid": "d6a58213-2516-5b34-817d-fb7f13944714", 00:25:27.136 "is_configured": true, 
00:25:27.136 "data_offset": 2048, 00:25:27.136 "data_size": 63488 00:25:27.136 }, 00:25:27.136 { 00:25:27.136 "name": null, 00:25:27.136 "uuid": "cacaacae-7bed-54bd-8765-8786a58ed2f1", 00:25:27.136 "is_configured": false, 00:25:27.136 "data_offset": 2048, 00:25:27.136 "data_size": 63488 00:25:27.136 }, 00:25:27.136 { 00:25:27.136 "name": null, 00:25:27.136 "uuid": "0f7ba813-89f4-58e5-af8a-092c9f82551d", 00:25:27.136 "is_configured": false, 00:25:27.136 "data_offset": 2048, 00:25:27.136 "data_size": 63488 00:25:27.136 }, 00:25:27.136 { 00:25:27.136 "name": null, 00:25:27.136 "uuid": "dc09cd11-9505-5ddb-9b6a-0da62ca0e2b6", 00:25:27.136 "is_configured": false, 00:25:27.136 "data_offset": 2048, 00:25:27.136 "data_size": 63488 00:25:27.136 } 00:25:27.136 ] 00:25:27.136 }' 00:25:27.136 17:04:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:27.136 17:04:15 -- common/autotest_common.sh@10 -- # set +x 00:25:27.702 17:04:16 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:25:27.702 17:04:16 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:27.702 17:04:16 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:27.960 17:04:16 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:27.960 17:04:16 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:27.960 17:04:16 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:28.217 17:04:16 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:28.217 17:04:16 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:28.217 17:04:16 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:28.217 17:04:17 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:28.217 17:04:17 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:28.217 17:04:17 -- bdev/bdev_raid.sh@489 -- # i=3 00:25:28.217 17:04:17 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:28.476 [2024-11-05 17:04:17.239524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:28.476 [2024-11-05 17:04:17.239939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:28.476 [2024-11-05 17:04:17.240096] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:25:28.476 [2024-11-05 17:04:17.240234] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:28.476 [2024-11-05 17:04:17.240755] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:28.476 [2024-11-05 17:04:17.240941] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:28.476 [2024-11-05 17:04:17.241160] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:25:28.476 [2024-11-05 17:04:17.241287] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:28.476 [2024-11-05 17:04:17.241394] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:28.476 [2024-11-05 17:04:17.241452] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c980 name raid_bdev1, state configuring 00:25:28.476 [2024-11-05 17:04:17.241640] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:28.476 pt4 00:25:28.476 17:04:17 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:28.476 17:04:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:28.476 17:04:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:28.476 17:04:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:28.476 17:04:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:28.476 17:04:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:28.476 17:04:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:28.476 17:04:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:28.476 17:04:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:28.476 17:04:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:28.476 17:04:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.476 17:04:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.734 17:04:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:28.734 "name": "raid_bdev1", 00:25:28.734 "uuid": "915ad24d-eb96-4c53-84ff-99579bafd0c3", 00:25:28.734 "strip_size_kb": 64, 00:25:28.734 "state": "configuring", 00:25:28.734 "raid_level": "raid5f", 00:25:28.734 "superblock": true, 00:25:28.734 "num_base_bdevs": 4, 00:25:28.734 "num_base_bdevs_discovered": 1, 00:25:28.734 "num_base_bdevs_operational": 3, 00:25:28.734 "base_bdevs_list": [ 00:25:28.734 { 00:25:28.734 "name": null, 00:25:28.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.734 "is_configured": false, 00:25:28.734 "data_offset": 2048, 00:25:28.734 "data_size": 63488 00:25:28.734 }, 00:25:28.734 { 00:25:28.734 "name": null, 00:25:28.734 "uuid": "cacaacae-7bed-54bd-8765-8786a58ed2f1", 00:25:28.734 "is_configured": false, 00:25:28.734 "data_offset": 2048, 00:25:28.734 "data_size": 63488 00:25:28.734 }, 00:25:28.734 { 00:25:28.734 "name": null, 00:25:28.734 "uuid": "0f7ba813-89f4-58e5-af8a-092c9f82551d", 00:25:28.734 "is_configured": false, 00:25:28.734 "data_offset": 2048, 00:25:28.734 "data_size": 63488 00:25:28.734 }, 00:25:28.734 { 00:25:28.734 "name": "pt4", 00:25:28.734 "uuid": "dc09cd11-9505-5ddb-9b6a-0da62ca0e2b6", 00:25:28.734 "is_configured": true, 00:25:28.734 "data_offset": 2048, 00:25:28.734 "data_size": 63488 00:25:28.734 } 00:25:28.734 ] 00:25:28.734 }' 00:25:28.734 17:04:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:28.734 17:04:17 -- common/autotest_common.sh@10 -- # set +x 00:25:29.300 17:04:18 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:25:29.300 17:04:18 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:29.300 17:04:18 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:29.558 [2024-11-05 17:04:18.307813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:29.558 [2024-11-05 17:04:18.308071] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:29.558 [2024-11-05 17:04:18.308235] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d280 00:25:29.558 [2024-11-05 17:04:18.308372] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:29.558 [2024-11-05 17:04:18.308910] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:29.558 [2024-11-05 17:04:18.309153] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:29.558 [2024-11-05 17:04:18.309365] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:29.558 [2024-11-05 17:04:18.309520] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:29.558 pt2 00:25:29.558 17:04:18 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:25:29.558 17:04:18 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:29.558 17:04:18 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:29.818 [2024-11-05 17:04:18.495875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:29.819 [2024-11-05 17:04:18.496126] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:29.819 [2024-11-05 17:04:18.496293] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:25:29.819 [2024-11-05 17:04:18.496434] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:29.819 [2024-11-05 17:04:18.497006] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:29.819 [2024-11-05 17:04:18.497202] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:29.819 [2024-11-05 17:04:18.497417] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:29.819 [2024-11-05 17:04:18.497570] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:29.819 [2024-11-05 17:04:18.497888] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000cf80 00:25:29.819 [2024-11-05 17:04:18.497995] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:29.819 [2024-11-05 17:04:18.498139] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:25:29.819 [2024-11-05 17:04:18.504233] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000cf80 00:25:29.819 [2024-11-05 17:04:18.504389] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000cf80 00:25:29.819 [2024-11-05 17:04:18.504794] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:29.819 pt3 00:25:29.819 17:04:18 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:25:29.819 17:04:18 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:29.819 17:04:18 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:29.819 17:04:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:29.819 17:04:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:29.819 17:04:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:29.819 17:04:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:29.819 17:04:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:29.819 17:04:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:29.819 17:04:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:29.819 17:04:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:29.819 17:04:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:29.819 17:04:18 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.819 17:04:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.819 17:04:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:29.819 "name": "raid_bdev1", 00:25:29.819 "uuid": "915ad24d-eb96-4c53-84ff-99579bafd0c3", 00:25:29.819 "strip_size_kb": 64, 00:25:29.819 "state": "online", 00:25:29.819 "raid_level": "raid5f", 00:25:29.819 "superblock": true, 00:25:29.819 "num_base_bdevs": 4, 00:25:29.819 "num_base_bdevs_discovered": 3, 00:25:29.819 "num_base_bdevs_operational": 3, 00:25:29.819 "base_bdevs_list": [ 00:25:29.819 { 00:25:29.819 "name": null, 00:25:29.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.819 "is_configured": false, 00:25:29.819 "data_offset": 2048, 00:25:29.819 "data_size": 63488 00:25:29.819 }, 00:25:29.819 { 00:25:29.819 "name": "pt2", 00:25:29.819 "uuid": "cacaacae-7bed-54bd-8765-8786a58ed2f1", 00:25:29.819 "is_configured": true, 00:25:29.819 "data_offset": 2048, 00:25:29.819 "data_size": 63488 00:25:29.819 }, 00:25:29.819 { 00:25:29.819 "name": "pt3", 00:25:29.819 "uuid": "0f7ba813-89f4-58e5-af8a-092c9f82551d", 00:25:29.819 "is_configured": true, 00:25:29.819 "data_offset": 2048, 00:25:29.819 "data_size": 63488 00:25:29.819 }, 00:25:29.819 { 00:25:29.819 "name": "pt4", 00:25:29.819 "uuid": "dc09cd11-9505-5ddb-9b6a-0da62ca0e2b6", 00:25:29.819 "is_configured": true, 00:25:29.819 "data_offset": 2048, 00:25:29.819 "data_size": 63488 00:25:29.819 } 00:25:29.819 ] 00:25:29.819 }' 00:25:29.819 17:04:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:29.819 17:04:18 -- common/autotest_common.sh@10 -- # set +x 00:25:30.417 17:04:19 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:30.417 17:04:19 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:25:30.675 [2024-11-05 17:04:19.475921] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:30.675 17:04:19 -- bdev/bdev_raid.sh@506 -- # '[' 915ad24d-eb96-4c53-84ff-99579bafd0c3 '!=' 915ad24d-eb96-4c53-84ff-99579bafd0c3 ']' 00:25:30.675 17:04:19 -- bdev/bdev_raid.sh@511 -- # killprocess 130393 00:25:30.675 17:04:19 -- common/autotest_common.sh@936 -- # '[' -z 130393 ']' 00:25:30.675 17:04:19 -- common/autotest_common.sh@940 -- # kill -0 130393 00:25:30.675 17:04:19 -- common/autotest_common.sh@941 -- # uname 00:25:30.675 17:04:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:30.675 17:04:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130393 00:25:30.675 killing process with pid 130393 00:25:30.675 17:04:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:30.675 17:04:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:30.675 17:04:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130393' 00:25:30.675 17:04:19 -- common/autotest_common.sh@955 -- # kill 130393 00:25:30.675 [2024-11-05 17:04:19.512613] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:30.675 17:04:19 -- common/autotest_common.sh@960 -- # wait 130393 00:25:30.675 [2024-11-05 17:04:19.512683] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:30.675 [2024-11-05 17:04:19.512756] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:30.675 [2024-11-05 17:04:19.512768] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state offline 00:25:30.939 [2024-11-05 17:04:19.771303] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:31.875 ************************************ 00:25:31.875 END TEST raid5f_superblock_test 00:25:31.875 ************************************ 00:25:31.875 17:04:20 -- bdev/bdev_raid.sh@513 -- # return 0 00:25:31.875 00:25:31.875 real 0m20.927s 00:25:31.875 user 0m38.247s 00:25:31.875 sys 0m2.550s 00:25:31.875 17:04:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:31.875 17:04:20 -- common/autotest_common.sh@10 -- # set +x 00:25:31.875 17:04:20 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:25:31.875 17:04:20 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:25:31.875 17:04:20 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:25:31.875 17:04:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:31.875 17:04:20 -- common/autotest_common.sh@10 -- # set +x 00:25:32.133 ************************************ 00:25:32.133 START TEST raid5f_rebuild_test 00:25:32.133 ************************************ 00:25:32.133 17:04:20 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 4 false false 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@544 -- # raid_pid=131056 
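For orientation at this point in the log: raid_rebuild_test does not stand up a separate SPDK app the way the superblock test did; it launches a bdevperf process as the JSON-RPC target, and every rpc.py call that follows talks to that process over the raid socket. A minimal sketch of the wiring, using only the binary path, socket, and flags traced in this run; the backgrounding and "$!" capture are an assumption about how raid_pid is obtained, not part of the trace:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/spdk-raid.sock \   # serve JSON-RPC on this UNIX socket
      -T raid_bdev1 \                # run the workload against raid_bdev1 once it exists
      -t 60 -w randrw -M 50 \        # 60 s of mixed random I/O, 50% reads
      -o 3M -q 2 \                   # 3 MiB I/O size, queue depth 2 (hence the zero-copy notice below)
      -U -z -L bdev_raid &           # -U/-z verbatim from this run; -L bdev_raid enables the *DEBUG* raid logs seen here
  raid_pid=$!
  # all subsequent configuration goes through the same socket, e.g.:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1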
00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@545 -- # waitforlisten 131056 /var/tmp/spdk-raid.sock 00:25:32.133 17:04:20 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:32.133 17:04:20 -- common/autotest_common.sh@829 -- # '[' -z 131056 ']' 00:25:32.133 17:04:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:32.133 17:04:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:32.133 17:04:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:32.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:32.133 17:04:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:32.133 17:04:20 -- common/autotest_common.sh@10 -- # set +x 00:25:32.133 [2024-11-05 17:04:20.842563] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:32.133 [2024-11-05 17:04:20.843001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131056 ] 00:25:32.133 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:32.133 Zero copy mechanism will not be used. 00:25:32.133 [2024-11-05 17:04:21.003685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.391 [2024-11-05 17:04:21.224091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.649 [2024-11-05 17:04:21.389692] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:32.907 17:04:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:32.907 17:04:21 -- common/autotest_common.sh@862 -- # return 0 00:25:32.907 17:04:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:32.907 17:04:21 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:32.907 17:04:21 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:33.165 BaseBdev1 00:25:33.165 17:04:22 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:33.165 17:04:22 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:33.165 17:04:22 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:33.423 BaseBdev2 00:25:33.423 17:04:22 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:33.423 17:04:22 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:33.423 17:04:22 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:33.681 BaseBdev3 00:25:33.681 17:04:22 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:33.681 17:04:22 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:33.681 17:04:22 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:33.939 BaseBdev4 00:25:33.939 17:04:22 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:34.197 spare_malloc 00:25:34.197 17:04:23 -- 
bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:34.455 spare_delay 00:25:34.455 17:04:23 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:34.713 [2024-11-05 17:04:23.471770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:34.713 [2024-11-05 17:04:23.472975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:34.713 [2024-11-05 17:04:23.473049] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:25:34.713 [2024-11-05 17:04:23.473204] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:34.713 [2024-11-05 17:04:23.475526] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:34.713 [2024-11-05 17:04:23.475727] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:34.713 spare 00:25:34.713 17:04:23 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:34.971 [2024-11-05 17:04:23.656064] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:34.971 [2024-11-05 17:04:23.657868] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:34.971 [2024-11-05 17:04:23.658038] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:34.971 [2024-11-05 17:04:23.658116] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:34.971 [2024-11-05 17:04:23.658318] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:25:34.971 [2024-11-05 17:04:23.658426] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:34.971 [2024-11-05 17:04:23.658591] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:25:34.971 [2024-11-05 17:04:23.664163] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:25:34.971 [2024-11-05 17:04:23.664314] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:25:34.971 [2024-11-05 17:04:23.664629] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:34.971 17:04:23 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:34.971 17:04:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:34.986 17:04:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:34.986 17:04:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:34.986 17:04:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:34.986 17:04:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:34.986 17:04:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:34.986 17:04:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:34.986 17:04:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:34.986 17:04:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:34.986 17:04:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
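[Editor's note — not part of the captured log] verify_raid_bdev_state (traced above and continued below) pulls the array's JSON out of bdev_raid_get_bdevs and asserts on a handful of fields. A minimal stand-alone sketch of the same check, assuming the log's socket path — a paraphrase of the assertions, not the function's literal body:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    state=$(jq -r '.state' <<< "$info")                      # expected "online"
    level=$(jq -r '.raid_level' <<< "$info")                 # expected "raid5f"
    strip=$(jq -r '.strip_size_kb' <<< "$info")              # expected 64
    ops=$(jq -r '.num_base_bdevs_operational' <<< "$info")   # 4 healthy, 3 degraded
    [[ $state == online && $level == raid5f && $strip == 64 ]] || echo "state mismatch"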
00:25:34.986 17:04:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.244 17:04:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:35.244 "name": "raid_bdev1", 00:25:35.244 "uuid": "0e877de2-3c13-4bd4-8e5e-adadb8d580ef", 00:25:35.244 "strip_size_kb": 64, 00:25:35.244 "state": "online", 00:25:35.244 "raid_level": "raid5f", 00:25:35.244 "superblock": false, 00:25:35.244 "num_base_bdevs": 4, 00:25:35.244 "num_base_bdevs_discovered": 4, 00:25:35.244 "num_base_bdevs_operational": 4, 00:25:35.244 "base_bdevs_list": [ 00:25:35.244 { 00:25:35.244 "name": "BaseBdev1", 00:25:35.244 "uuid": "a0144921-fa65-4745-a89d-c0116df9a122", 00:25:35.244 "is_configured": true, 00:25:35.244 "data_offset": 0, 00:25:35.244 "data_size": 65536 00:25:35.244 }, 00:25:35.244 { 00:25:35.244 "name": "BaseBdev2", 00:25:35.244 "uuid": "cd206dd1-3d0d-4b0f-99dc-1625294ea184", 00:25:35.244 "is_configured": true, 00:25:35.244 "data_offset": 0, 00:25:35.244 "data_size": 65536 00:25:35.244 }, 00:25:35.244 { 00:25:35.244 "name": "BaseBdev3", 00:25:35.244 "uuid": "84b076d7-d1d6-4a8a-973d-40ca5b8b0278", 00:25:35.244 "is_configured": true, 00:25:35.244 "data_offset": 0, 00:25:35.244 "data_size": 65536 00:25:35.244 }, 00:25:35.244 { 00:25:35.244 "name": "BaseBdev4", 00:25:35.244 "uuid": "d28e1646-74d7-46d1-81e9-e74f34bad319", 00:25:35.244 "is_configured": true, 00:25:35.244 "data_offset": 0, 00:25:35.244 "data_size": 65536 00:25:35.244 } 00:25:35.244 ] 00:25:35.244 }' 00:25:35.244 17:04:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:35.244 17:04:23 -- common/autotest_common.sh@10 -- # set +x 00:25:35.810 17:04:24 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:35.810 17:04:24 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:35.810 [2024-11-05 17:04:24.699045] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:36.068 17:04:24 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:25:36.068 17:04:24 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.068 17:04:24 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:36.327 17:04:24 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:25:36.327 17:04:24 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:25:36.327 17:04:24 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:25:36.327 17:04:24 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:36.327 17:04:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:36.327 17:04:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:36.327 17:04:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:36.327 17:04:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:36.327 17:04:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:36.327 17:04:24 -- bdev/nbd_common.sh@12 -- # local i 00:25:36.327 17:04:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:36.327 17:04:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:36.327 17:04:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:36.327 [2024-11-05 17:04:25.219023] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:25:36.585 /dev/nbd0 00:25:36.585 17:04:25 -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:25:36.585 17:04:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:36.585 17:04:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:36.585 17:04:25 -- common/autotest_common.sh@867 -- # local i 00:25:36.585 17:04:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:36.585 17:04:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:36.585 17:04:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:36.585 17:04:25 -- common/autotest_common.sh@871 -- # break 00:25:36.585 17:04:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:36.585 17:04:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:36.585 17:04:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:36.585 1+0 records in 00:25:36.585 1+0 records out 00:25:36.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561565 s, 7.3 MB/s 00:25:36.585 17:04:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:36.585 17:04:25 -- common/autotest_common.sh@884 -- # size=4096 00:25:36.585 17:04:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:36.585 17:04:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:36.585 17:04:25 -- common/autotest_common.sh@887 -- # return 0 00:25:36.585 17:04:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:36.585 17:04:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:36.585 17:04:25 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:25:36.585 17:04:25 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:25:36.585 17:04:25 -- bdev/bdev_raid.sh@582 -- # echo 192 00:25:36.585 17:04:25 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:25:37.152 512+0 records in 00:25:37.152 512+0 records out 00:25:37.152 100663296 bytes (101 MB, 96 MiB) copied, 0.446206 s, 226 MB/s 00:25:37.152 17:04:25 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:37.152 17:04:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:37.152 17:04:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:37.152 17:04:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:37.152 17:04:25 -- bdev/nbd_common.sh@51 -- # local i 00:25:37.152 17:04:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:37.152 17:04:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:37.152 17:04:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:37.152 17:04:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:37.152 17:04:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:37.152 17:04:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:37.152 17:04:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:37.152 17:04:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:37.152 [2024-11-05 17:04:25.952848] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:37.152 17:04:25 -- bdev/nbd_common.sh@41 -- # break 00:25:37.152 17:04:25 -- bdev/nbd_common.sh@45 -- # return 0 00:25:37.152 17:04:25 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:37.410 [2024-11-05 17:04:26.200879] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
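[Editor's note — not part of the captured log] The bs=196608 in the dd above is the raid5f full-stripe write size, which the test derives as write_unit_size=384 blocks ('echo 192' KiB): with 4 base bdevs, one strip per stripe holds parity, leaving 3 data strips of 64 KiB each. The same arithmetic, as shell:

    strip_kb=64; nbdevs=4; data=$((nbdevs - 1))    # raid5f: one parity strip per stripe
    full_stripe=$((data * strip_kb * 1024))        # 3 * 64 KiB = 196608 bytes
    echo $((full_stripe / 512))                    # 384 blocks of 512 B (write_unit_size)
    echo $((512 * full_stripe))                    # 100663296 bytes, the 96 MiB dd reports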
00:25:37.410 17:04:26 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:37.410 17:04:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:37.410 17:04:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:37.410 17:04:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:37.410 17:04:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:37.410 17:04:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:37.410 17:04:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:37.410 17:04:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:37.410 17:04:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:37.410 17:04:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:37.410 17:04:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.410 17:04:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.668 17:04:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:37.668 "name": "raid_bdev1", 00:25:37.668 "uuid": "0e877de2-3c13-4bd4-8e5e-adadb8d580ef", 00:25:37.668 "strip_size_kb": 64, 00:25:37.668 "state": "online", 00:25:37.668 "raid_level": "raid5f", 00:25:37.668 "superblock": false, 00:25:37.668 "num_base_bdevs": 4, 00:25:37.668 "num_base_bdevs_discovered": 3, 00:25:37.668 "num_base_bdevs_operational": 3, 00:25:37.668 "base_bdevs_list": [ 00:25:37.668 { 00:25:37.668 "name": null, 00:25:37.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.668 "is_configured": false, 00:25:37.668 "data_offset": 0, 00:25:37.668 "data_size": 65536 00:25:37.668 }, 00:25:37.668 { 00:25:37.668 "name": "BaseBdev2", 00:25:37.668 "uuid": "cd206dd1-3d0d-4b0f-99dc-1625294ea184", 00:25:37.668 "is_configured": true, 00:25:37.668 "data_offset": 0, 00:25:37.668 "data_size": 65536 00:25:37.668 }, 00:25:37.668 { 00:25:37.668 "name": "BaseBdev3", 00:25:37.668 "uuid": "84b076d7-d1d6-4a8a-973d-40ca5b8b0278", 00:25:37.668 "is_configured": true, 00:25:37.668 "data_offset": 0, 00:25:37.668 "data_size": 65536 00:25:37.668 }, 00:25:37.668 { 00:25:37.668 "name": "BaseBdev4", 00:25:37.668 "uuid": "d28e1646-74d7-46d1-81e9-e74f34bad319", 00:25:37.668 "is_configured": true, 00:25:37.668 "data_offset": 0, 00:25:37.668 "data_size": 65536 00:25:37.668 } 00:25:37.668 ] 00:25:37.668 }' 00:25:37.668 17:04:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:37.668 17:04:26 -- common/autotest_common.sh@10 -- # set +x 00:25:38.235 17:04:27 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:38.493 [2024-11-05 17:04:27.221055] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:38.493 [2024-11-05 17:04:27.221213] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:38.493 [2024-11-05 17:04:27.231692] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:25:38.493 [2024-11-05 17:04:27.238545] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:38.493 17:04:27 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:39.428 17:04:28 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:39.428 17:04:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:39.428 17:04:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 
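[Editor's note — not part of the captured log] Once bdev_raid_add_base_bdev kicks off the rebuild ('Started rebuild on raid bdev raid_bdev1' above), verify_raid_bdev_process repeatedly reads the process fields shown in the JSON dumps that follow. A minimal polling sketch over the same RPC output, assuming the log's socket path:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    while :; do
        info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
        jq -r '.process.progress | "\(.blocks) blocks, \(.percent)%"' <<< "$info"
        sleep 1
    done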
00:25:39.428 17:04:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:39.428 17:04:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:39.428 17:04:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.428 17:04:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.686 17:04:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:39.686 "name": "raid_bdev1", 00:25:39.686 "uuid": "0e877de2-3c13-4bd4-8e5e-adadb8d580ef", 00:25:39.686 "strip_size_kb": 64, 00:25:39.686 "state": "online", 00:25:39.686 "raid_level": "raid5f", 00:25:39.686 "superblock": false, 00:25:39.686 "num_base_bdevs": 4, 00:25:39.686 "num_base_bdevs_discovered": 4, 00:25:39.686 "num_base_bdevs_operational": 4, 00:25:39.686 "process": { 00:25:39.686 "type": "rebuild", 00:25:39.686 "target": "spare", 00:25:39.686 "progress": { 00:25:39.686 "blocks": 21120, 00:25:39.686 "percent": 10 00:25:39.686 } 00:25:39.686 }, 00:25:39.686 "base_bdevs_list": [ 00:25:39.686 { 00:25:39.686 "name": "spare", 00:25:39.686 "uuid": "f6cb5ebc-c089-5cd9-9f5a-f30e168bc1ba", 00:25:39.686 "is_configured": true, 00:25:39.686 "data_offset": 0, 00:25:39.686 "data_size": 65536 00:25:39.686 }, 00:25:39.686 { 00:25:39.686 "name": "BaseBdev2", 00:25:39.686 "uuid": "cd206dd1-3d0d-4b0f-99dc-1625294ea184", 00:25:39.686 "is_configured": true, 00:25:39.686 "data_offset": 0, 00:25:39.686 "data_size": 65536 00:25:39.686 }, 00:25:39.686 { 00:25:39.686 "name": "BaseBdev3", 00:25:39.686 "uuid": "84b076d7-d1d6-4a8a-973d-40ca5b8b0278", 00:25:39.686 "is_configured": true, 00:25:39.686 "data_offset": 0, 00:25:39.686 "data_size": 65536 00:25:39.686 }, 00:25:39.686 { 00:25:39.686 "name": "BaseBdev4", 00:25:39.686 "uuid": "d28e1646-74d7-46d1-81e9-e74f34bad319", 00:25:39.686 "is_configured": true, 00:25:39.686 "data_offset": 0, 00:25:39.686 "data_size": 65536 00:25:39.686 } 00:25:39.686 ] 00:25:39.686 }' 00:25:39.686 17:04:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:39.686 17:04:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:39.686 17:04:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:39.686 17:04:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:39.686 17:04:28 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:39.944 [2024-11-05 17:04:28.769031] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:40.202 [2024-11-05 17:04:28.848504] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:40.202 [2024-11-05 17:04:28.848728] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:40.202 17:04:28 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:40.202 17:04:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:40.202 17:04:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:40.202 17:04:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:40.202 17:04:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:40.202 17:04:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:40.202 17:04:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:40.202 17:04:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:40.202 17:04:28 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:25:40.202 17:04:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:40.202 17:04:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.202 17:04:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.461 17:04:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:40.461 "name": "raid_bdev1", 00:25:40.461 "uuid": "0e877de2-3c13-4bd4-8e5e-adadb8d580ef", 00:25:40.461 "strip_size_kb": 64, 00:25:40.461 "state": "online", 00:25:40.461 "raid_level": "raid5f", 00:25:40.461 "superblock": false, 00:25:40.461 "num_base_bdevs": 4, 00:25:40.461 "num_base_bdevs_discovered": 3, 00:25:40.461 "num_base_bdevs_operational": 3, 00:25:40.461 "base_bdevs_list": [ 00:25:40.461 { 00:25:40.461 "name": null, 00:25:40.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.461 "is_configured": false, 00:25:40.461 "data_offset": 0, 00:25:40.461 "data_size": 65536 00:25:40.461 }, 00:25:40.461 { 00:25:40.461 "name": "BaseBdev2", 00:25:40.461 "uuid": "cd206dd1-3d0d-4b0f-99dc-1625294ea184", 00:25:40.461 "is_configured": true, 00:25:40.461 "data_offset": 0, 00:25:40.461 "data_size": 65536 00:25:40.461 }, 00:25:40.461 { 00:25:40.461 "name": "BaseBdev3", 00:25:40.461 "uuid": "84b076d7-d1d6-4a8a-973d-40ca5b8b0278", 00:25:40.461 "is_configured": true, 00:25:40.461 "data_offset": 0, 00:25:40.461 "data_size": 65536 00:25:40.461 }, 00:25:40.461 { 00:25:40.461 "name": "BaseBdev4", 00:25:40.461 "uuid": "d28e1646-74d7-46d1-81e9-e74f34bad319", 00:25:40.461 "is_configured": true, 00:25:40.461 "data_offset": 0, 00:25:40.461 "data_size": 65536 00:25:40.461 } 00:25:40.461 ] 00:25:40.461 }' 00:25:40.461 17:04:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:40.461 17:04:29 -- common/autotest_common.sh@10 -- # set +x 00:25:41.027 17:04:29 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:41.027 17:04:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:41.028 17:04:29 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:41.028 17:04:29 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:41.028 17:04:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:41.028 17:04:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.028 17:04:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.286 17:04:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:41.286 "name": "raid_bdev1", 00:25:41.286 "uuid": "0e877de2-3c13-4bd4-8e5e-adadb8d580ef", 00:25:41.286 "strip_size_kb": 64, 00:25:41.286 "state": "online", 00:25:41.286 "raid_level": "raid5f", 00:25:41.286 "superblock": false, 00:25:41.286 "num_base_bdevs": 4, 00:25:41.286 "num_base_bdevs_discovered": 3, 00:25:41.286 "num_base_bdevs_operational": 3, 00:25:41.286 "base_bdevs_list": [ 00:25:41.286 { 00:25:41.286 "name": null, 00:25:41.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.286 "is_configured": false, 00:25:41.286 "data_offset": 0, 00:25:41.286 "data_size": 65536 00:25:41.286 }, 00:25:41.286 { 00:25:41.286 "name": "BaseBdev2", 00:25:41.286 "uuid": "cd206dd1-3d0d-4b0f-99dc-1625294ea184", 00:25:41.286 "is_configured": true, 00:25:41.286 "data_offset": 0, 00:25:41.286 "data_size": 65536 00:25:41.286 }, 00:25:41.286 { 00:25:41.286 "name": "BaseBdev3", 00:25:41.286 "uuid": "84b076d7-d1d6-4a8a-973d-40ca5b8b0278", 00:25:41.286 "is_configured": true, 
00:25:41.286 "data_offset": 0, 00:25:41.286 "data_size": 65536 00:25:41.286 }, 00:25:41.286 { 00:25:41.286 "name": "BaseBdev4", 00:25:41.286 "uuid": "d28e1646-74d7-46d1-81e9-e74f34bad319", 00:25:41.286 "is_configured": true, 00:25:41.286 "data_offset": 0, 00:25:41.286 "data_size": 65536 00:25:41.286 } 00:25:41.286 ] 00:25:41.286 }' 00:25:41.286 17:04:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:41.286 17:04:30 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:41.286 17:04:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:41.286 17:04:30 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:41.286 17:04:30 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:41.544 [2024-11-05 17:04:30.398116] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:41.544 [2024-11-05 17:04:30.398297] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:41.544 [2024-11-05 17:04:30.408319] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:25:41.544 [2024-11-05 17:04:30.415385] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:41.544 17:04:30 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:42.918 "name": "raid_bdev1", 00:25:42.918 "uuid": "0e877de2-3c13-4bd4-8e5e-adadb8d580ef", 00:25:42.918 "strip_size_kb": 64, 00:25:42.918 "state": "online", 00:25:42.918 "raid_level": "raid5f", 00:25:42.918 "superblock": false, 00:25:42.918 "num_base_bdevs": 4, 00:25:42.918 "num_base_bdevs_discovered": 4, 00:25:42.918 "num_base_bdevs_operational": 4, 00:25:42.918 "process": { 00:25:42.918 "type": "rebuild", 00:25:42.918 "target": "spare", 00:25:42.918 "progress": { 00:25:42.918 "blocks": 23040, 00:25:42.918 "percent": 11 00:25:42.918 } 00:25:42.918 }, 00:25:42.918 "base_bdevs_list": [ 00:25:42.918 { 00:25:42.918 "name": "spare", 00:25:42.918 "uuid": "f6cb5ebc-c089-5cd9-9f5a-f30e168bc1ba", 00:25:42.918 "is_configured": true, 00:25:42.918 "data_offset": 0, 00:25:42.918 "data_size": 65536 00:25:42.918 }, 00:25:42.918 { 00:25:42.918 "name": "BaseBdev2", 00:25:42.918 "uuid": "cd206dd1-3d0d-4b0f-99dc-1625294ea184", 00:25:42.918 "is_configured": true, 00:25:42.918 "data_offset": 0, 00:25:42.918 "data_size": 65536 00:25:42.918 }, 00:25:42.918 { 00:25:42.918 "name": "BaseBdev3", 00:25:42.918 "uuid": "84b076d7-d1d6-4a8a-973d-40ca5b8b0278", 00:25:42.918 "is_configured": true, 00:25:42.918 "data_offset": 0, 00:25:42.918 "data_size": 65536 00:25:42.918 }, 00:25:42.918 { 00:25:42.918 "name": "BaseBdev4", 00:25:42.918 "uuid": "d28e1646-74d7-46d1-81e9-e74f34bad319", 00:25:42.918 "is_configured": true, 00:25:42.918 "data_offset": 0, 
00:25:42.918 "data_size": 65536 00:25:42.918 } 00:25:42.918 ] 00:25:42.918 }' 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@657 -- # local timeout=711 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.918 17:04:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:43.177 17:04:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:43.177 "name": "raid_bdev1", 00:25:43.177 "uuid": "0e877de2-3c13-4bd4-8e5e-adadb8d580ef", 00:25:43.177 "strip_size_kb": 64, 00:25:43.177 "state": "online", 00:25:43.177 "raid_level": "raid5f", 00:25:43.177 "superblock": false, 00:25:43.177 "num_base_bdevs": 4, 00:25:43.177 "num_base_bdevs_discovered": 4, 00:25:43.177 "num_base_bdevs_operational": 4, 00:25:43.177 "process": { 00:25:43.177 "type": "rebuild", 00:25:43.177 "target": "spare", 00:25:43.177 "progress": { 00:25:43.177 "blocks": 28800, 00:25:43.177 "percent": 14 00:25:43.177 } 00:25:43.177 }, 00:25:43.177 "base_bdevs_list": [ 00:25:43.177 { 00:25:43.177 "name": "spare", 00:25:43.177 "uuid": "f6cb5ebc-c089-5cd9-9f5a-f30e168bc1ba", 00:25:43.177 "is_configured": true, 00:25:43.177 "data_offset": 0, 00:25:43.177 "data_size": 65536 00:25:43.177 }, 00:25:43.177 { 00:25:43.177 "name": "BaseBdev2", 00:25:43.177 "uuid": "cd206dd1-3d0d-4b0f-99dc-1625294ea184", 00:25:43.177 "is_configured": true, 00:25:43.177 "data_offset": 0, 00:25:43.177 "data_size": 65536 00:25:43.177 }, 00:25:43.177 { 00:25:43.177 "name": "BaseBdev3", 00:25:43.177 "uuid": "84b076d7-d1d6-4a8a-973d-40ca5b8b0278", 00:25:43.177 "is_configured": true, 00:25:43.177 "data_offset": 0, 00:25:43.177 "data_size": 65536 00:25:43.177 }, 00:25:43.177 { 00:25:43.177 "name": "BaseBdev4", 00:25:43.177 "uuid": "d28e1646-74d7-46d1-81e9-e74f34bad319", 00:25:43.177 "is_configured": true, 00:25:43.177 "data_offset": 0, 00:25:43.177 "data_size": 65536 00:25:43.177 } 00:25:43.177 ] 00:25:43.177 }' 00:25:43.177 17:04:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:43.177 17:04:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:43.177 17:04:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:43.435 17:04:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:43.435 17:04:32 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:44.369 17:04:33 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:44.369 17:04:33 -- bdev/bdev_raid.sh@659 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:44.369 17:04:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:44.369 17:04:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:44.369 17:04:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:44.369 17:04:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:44.369 17:04:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.369 17:04:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.627 17:04:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:44.627 "name": "raid_bdev1", 00:25:44.627 "uuid": "0e877de2-3c13-4bd4-8e5e-adadb8d580ef", 00:25:44.627 "strip_size_kb": 64, 00:25:44.627 "state": "online", 00:25:44.627 "raid_level": "raid5f", 00:25:44.627 "superblock": false, 00:25:44.627 "num_base_bdevs": 4, 00:25:44.627 "num_base_bdevs_discovered": 4, 00:25:44.627 "num_base_bdevs_operational": 4, 00:25:44.627 "process": { 00:25:44.627 "type": "rebuild", 00:25:44.627 "target": "spare", 00:25:44.627 "progress": { 00:25:44.627 "blocks": 53760, 00:25:44.627 "percent": 27 00:25:44.627 } 00:25:44.627 }, 00:25:44.627 "base_bdevs_list": [ 00:25:44.627 { 00:25:44.627 "name": "spare", 00:25:44.627 "uuid": "f6cb5ebc-c089-5cd9-9f5a-f30e168bc1ba", 00:25:44.627 "is_configured": true, 00:25:44.627 "data_offset": 0, 00:25:44.627 "data_size": 65536 00:25:44.627 }, 00:25:44.627 { 00:25:44.627 "name": "BaseBdev2", 00:25:44.627 "uuid": "cd206dd1-3d0d-4b0f-99dc-1625294ea184", 00:25:44.627 "is_configured": true, 00:25:44.627 "data_offset": 0, 00:25:44.627 "data_size": 65536 00:25:44.627 }, 00:25:44.627 { 00:25:44.627 "name": "BaseBdev3", 00:25:44.627 "uuid": "84b076d7-d1d6-4a8a-973d-40ca5b8b0278", 00:25:44.627 "is_configured": true, 00:25:44.627 "data_offset": 0, 00:25:44.627 "data_size": 65536 00:25:44.627 }, 00:25:44.627 { 00:25:44.627 "name": "BaseBdev4", 00:25:44.627 "uuid": "d28e1646-74d7-46d1-81e9-e74f34bad319", 00:25:44.627 "is_configured": true, 00:25:44.627 "data_offset": 0, 00:25:44.627 "data_size": 65536 00:25:44.627 } 00:25:44.627 ] 00:25:44.627 }' 00:25:44.627 17:04:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:44.627 17:04:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:44.627 17:04:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:44.627 17:04:33 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:44.627 17:04:33 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:45.608 17:04:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:45.608 17:04:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:45.608 17:04:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:45.608 17:04:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:45.608 17:04:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:45.608 17:04:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:45.608 17:04:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.608 17:04:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.867 17:04:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:45.867 "name": "raid_bdev1", 00:25:45.867 "uuid": "0e877de2-3c13-4bd4-8e5e-adadb8d580ef", 00:25:45.867 "strip_size_kb": 64, 00:25:45.867 "state": "online", 
00:25:45.867 "raid_level": "raid5f", 00:25:45.867 "superblock": false, 00:25:45.867 "num_base_bdevs": 4, 00:25:45.867 "num_base_bdevs_discovered": 4, 00:25:45.867 "num_base_bdevs_operational": 4, 00:25:45.867 "process": { 00:25:45.867 "type": "rebuild", 00:25:45.867 "target": "spare", 00:25:45.867 "progress": { 00:25:45.867 "blocks": 80640, 00:25:45.867 "percent": 41 00:25:45.867 } 00:25:45.867 }, 00:25:45.867 "base_bdevs_list": [ 00:25:45.867 { 00:25:45.867 "name": "spare", 00:25:45.867 "uuid": "f6cb5ebc-c089-5cd9-9f5a-f30e168bc1ba", 00:25:45.867 "is_configured": true, 00:25:45.867 "data_offset": 0, 00:25:45.867 "data_size": 65536 00:25:45.867 }, 00:25:45.867 { 00:25:45.867 "name": "BaseBdev2", 00:25:45.867 "uuid": "cd206dd1-3d0d-4b0f-99dc-1625294ea184", 00:25:45.867 "is_configured": true, 00:25:45.867 "data_offset": 0, 00:25:45.867 "data_size": 65536 00:25:45.867 }, 00:25:45.867 { 00:25:45.867 "name": "BaseBdev3", 00:25:45.867 "uuid": "84b076d7-d1d6-4a8a-973d-40ca5b8b0278", 00:25:45.867 "is_configured": true, 00:25:45.867 "data_offset": 0, 00:25:45.867 "data_size": 65536 00:25:45.867 }, 00:25:45.867 { 00:25:45.867 "name": "BaseBdev4", 00:25:45.867 "uuid": "d28e1646-74d7-46d1-81e9-e74f34bad319", 00:25:45.867 "is_configured": true, 00:25:45.867 "data_offset": 0, 00:25:45.867 "data_size": 65536 00:25:45.867 } 00:25:45.867 ] 00:25:45.867 }' 00:25:45.867 17:04:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:45.867 17:04:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:45.867 17:04:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:46.125 17:04:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:46.125 17:04:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:47.059 17:04:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:47.059 17:04:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:47.059 17:04:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:47.059 17:04:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:47.059 17:04:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:47.059 17:04:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:47.059 17:04:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.059 17:04:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.318 17:04:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:47.318 "name": "raid_bdev1", 00:25:47.318 "uuid": "0e877de2-3c13-4bd4-8e5e-adadb8d580ef", 00:25:47.318 "strip_size_kb": 64, 00:25:47.318 "state": "online", 00:25:47.318 "raid_level": "raid5f", 00:25:47.318 "superblock": false, 00:25:47.318 "num_base_bdevs": 4, 00:25:47.318 "num_base_bdevs_discovered": 4, 00:25:47.318 "num_base_bdevs_operational": 4, 00:25:47.318 "process": { 00:25:47.318 "type": "rebuild", 00:25:47.318 "target": "spare", 00:25:47.318 "progress": { 00:25:47.318 "blocks": 105600, 00:25:47.318 "percent": 53 00:25:47.318 } 00:25:47.318 }, 00:25:47.318 "base_bdevs_list": [ 00:25:47.318 { 00:25:47.318 "name": "spare", 00:25:47.318 "uuid": "f6cb5ebc-c089-5cd9-9f5a-f30e168bc1ba", 00:25:47.318 "is_configured": true, 00:25:47.318 "data_offset": 0, 00:25:47.318 "data_size": 65536 00:25:47.318 }, 00:25:47.318 { 00:25:47.318 "name": "BaseBdev2", 00:25:47.318 "uuid": "cd206dd1-3d0d-4b0f-99dc-1625294ea184", 00:25:47.318 "is_configured": true, 00:25:47.318 "data_offset": 0, 
00:25:47.318 "data_size": 65536 00:25:47.318 }, 00:25:47.318 { 00:25:47.318 "name": "BaseBdev3", 00:25:47.318 "uuid": "84b076d7-d1d6-4a8a-973d-40ca5b8b0278", 00:25:47.318 "is_configured": true, 00:25:47.318 "data_offset": 0, 00:25:47.318 "data_size": 65536 00:25:47.318 }, 00:25:47.318 { 00:25:47.318 "name": "BaseBdev4", 00:25:47.318 "uuid": "d28e1646-74d7-46d1-81e9-e74f34bad319", 00:25:47.318 "is_configured": true, 00:25:47.318 "data_offset": 0, 00:25:47.318 "data_size": 65536 00:25:47.318 } 00:25:47.318 ] 00:25:47.318 }' 00:25:47.318 17:04:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:47.318 17:04:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:47.318 17:04:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:47.318 17:04:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:47.318 17:04:36 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:48.252 17:04:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:48.252 17:04:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:48.252 17:04:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:48.252 17:04:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:48.252 17:04:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:48.252 17:04:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:48.252 17:04:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.252 17:04:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.509 17:04:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:48.509 "name": "raid_bdev1", 00:25:48.509 "uuid": "0e877de2-3c13-4bd4-8e5e-adadb8d580ef", 00:25:48.509 "strip_size_kb": 64, 00:25:48.509 "state": "online", 00:25:48.509 "raid_level": "raid5f", 00:25:48.509 "superblock": false, 00:25:48.509 "num_base_bdevs": 4, 00:25:48.509 "num_base_bdevs_discovered": 4, 00:25:48.509 "num_base_bdevs_operational": 4, 00:25:48.509 "process": { 00:25:48.509 "type": "rebuild", 00:25:48.509 "target": "spare", 00:25:48.510 "progress": { 00:25:48.510 "blocks": 130560, 00:25:48.510 "percent": 66 00:25:48.510 } 00:25:48.510 }, 00:25:48.510 "base_bdevs_list": [ 00:25:48.510 { 00:25:48.510 "name": "spare", 00:25:48.510 "uuid": "f6cb5ebc-c089-5cd9-9f5a-f30e168bc1ba", 00:25:48.510 "is_configured": true, 00:25:48.510 "data_offset": 0, 00:25:48.510 "data_size": 65536 00:25:48.510 }, 00:25:48.510 { 00:25:48.510 "name": "BaseBdev2", 00:25:48.510 "uuid": "cd206dd1-3d0d-4b0f-99dc-1625294ea184", 00:25:48.510 "is_configured": true, 00:25:48.510 "data_offset": 0, 00:25:48.510 "data_size": 65536 00:25:48.510 }, 00:25:48.510 { 00:25:48.510 "name": "BaseBdev3", 00:25:48.510 "uuid": "84b076d7-d1d6-4a8a-973d-40ca5b8b0278", 00:25:48.510 "is_configured": true, 00:25:48.510 "data_offset": 0, 00:25:48.510 "data_size": 65536 00:25:48.510 }, 00:25:48.510 { 00:25:48.510 "name": "BaseBdev4", 00:25:48.510 "uuid": "d28e1646-74d7-46d1-81e9-e74f34bad319", 00:25:48.510 "is_configured": true, 00:25:48.510 "data_offset": 0, 00:25:48.510 "data_size": 65536 00:25:48.510 } 00:25:48.510 ] 00:25:48.510 }' 00:25:48.510 17:04:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:48.510 17:04:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:48.510 17:04:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:48.767 17:04:37 -- bdev/bdev_raid.sh@191 -- # [[ 
spare == \s\p\a\r\e ]] 00:25:48.767 17:04:37 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:49.701 17:04:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:49.701 17:04:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:49.701 17:04:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:49.701 17:04:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:49.701 17:04:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:49.701 17:04:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:49.701 17:04:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.701 17:04:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.960 17:04:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:49.960 "name": "raid_bdev1", 00:25:49.960 "uuid": "0e877de2-3c13-4bd4-8e5e-adadb8d580ef", 00:25:49.960 "strip_size_kb": 64, 00:25:49.960 "state": "online", 00:25:49.960 "raid_level": "raid5f", 00:25:49.960 "superblock": false, 00:25:49.960 "num_base_bdevs": 4, 00:25:49.960 "num_base_bdevs_discovered": 4, 00:25:49.960 "num_base_bdevs_operational": 4, 00:25:49.960 "process": { 00:25:49.960 "type": "rebuild", 00:25:49.960 "target": "spare", 00:25:49.960 "progress": { 00:25:49.960 "blocks": 155520, 00:25:49.960 "percent": 79 00:25:49.960 } 00:25:49.960 }, 00:25:49.960 "base_bdevs_list": [ 00:25:49.960 { 00:25:49.960 "name": "spare", 00:25:49.960 "uuid": "f6cb5ebc-c089-5cd9-9f5a-f30e168bc1ba", 00:25:49.960 "is_configured": true, 00:25:49.960 "data_offset": 0, 00:25:49.960 "data_size": 65536 00:25:49.960 }, 00:25:49.960 { 00:25:49.960 "name": "BaseBdev2", 00:25:49.960 "uuid": "cd206dd1-3d0d-4b0f-99dc-1625294ea184", 00:25:49.960 "is_configured": true, 00:25:49.960 "data_offset": 0, 00:25:49.960 "data_size": 65536 00:25:49.960 }, 00:25:49.960 { 00:25:49.960 "name": "BaseBdev3", 00:25:49.960 "uuid": "84b076d7-d1d6-4a8a-973d-40ca5b8b0278", 00:25:49.960 "is_configured": true, 00:25:49.960 "data_offset": 0, 00:25:49.960 "data_size": 65536 00:25:49.960 }, 00:25:49.960 { 00:25:49.960 "name": "BaseBdev4", 00:25:49.960 "uuid": "d28e1646-74d7-46d1-81e9-e74f34bad319", 00:25:49.960 "is_configured": true, 00:25:49.960 "data_offset": 0, 00:25:49.960 "data_size": 65536 00:25:49.960 } 00:25:49.960 ] 00:25:49.960 }' 00:25:49.960 17:04:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:49.960 17:04:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:49.960 17:04:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:49.960 17:04:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:49.960 17:04:38 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:50.895 17:04:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:50.895 17:04:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:50.895 17:04:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:50.895 17:04:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:50.895 17:04:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:50.895 17:04:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:50.895 17:04:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.895 17:04:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.152 17:04:39 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:51.152 "name": "raid_bdev1", 00:25:51.152 "uuid": "0e877de2-3c13-4bd4-8e5e-adadb8d580ef", 00:25:51.152 "strip_size_kb": 64, 00:25:51.152 "state": "online", 00:25:51.152 "raid_level": "raid5f", 00:25:51.152 "superblock": false, 00:25:51.152 "num_base_bdevs": 4, 00:25:51.152 "num_base_bdevs_discovered": 4, 00:25:51.152 "num_base_bdevs_operational": 4, 00:25:51.152 "process": { 00:25:51.152 "type": "rebuild", 00:25:51.152 "target": "spare", 00:25:51.152 "progress": { 00:25:51.152 "blocks": 182400, 00:25:51.152 "percent": 92 00:25:51.152 } 00:25:51.152 }, 00:25:51.152 "base_bdevs_list": [ 00:25:51.152 { 00:25:51.152 "name": "spare", 00:25:51.152 "uuid": "f6cb5ebc-c089-5cd9-9f5a-f30e168bc1ba", 00:25:51.152 "is_configured": true, 00:25:51.152 "data_offset": 0, 00:25:51.152 "data_size": 65536 00:25:51.152 }, 00:25:51.152 { 00:25:51.152 "name": "BaseBdev2", 00:25:51.152 "uuid": "cd206dd1-3d0d-4b0f-99dc-1625294ea184", 00:25:51.152 "is_configured": true, 00:25:51.152 "data_offset": 0, 00:25:51.152 "data_size": 65536 00:25:51.152 }, 00:25:51.152 { 00:25:51.152 "name": "BaseBdev3", 00:25:51.152 "uuid": "84b076d7-d1d6-4a8a-973d-40ca5b8b0278", 00:25:51.152 "is_configured": true, 00:25:51.152 "data_offset": 0, 00:25:51.152 "data_size": 65536 00:25:51.152 }, 00:25:51.152 { 00:25:51.152 "name": "BaseBdev4", 00:25:51.152 "uuid": "d28e1646-74d7-46d1-81e9-e74f34bad319", 00:25:51.152 "is_configured": true, 00:25:51.152 "data_offset": 0, 00:25:51.152 "data_size": 65536 00:25:51.152 } 00:25:51.152 ] 00:25:51.152 }' 00:25:51.152 17:04:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:51.152 17:04:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:51.410 17:04:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:51.410 17:04:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:51.410 17:04:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:51.976 [2024-11-05 17:04:40.776218] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:51.976 [2024-11-05 17:04:40.776445] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:51.976 [2024-11-05 17:04:40.776628] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:52.233 17:04:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:52.233 17:04:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:52.233 17:04:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:52.233 17:04:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:52.233 17:04:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:52.233 17:04:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:52.233 17:04:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.233 17:04:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.492 17:04:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:52.492 "name": "raid_bdev1", 00:25:52.492 "uuid": "0e877de2-3c13-4bd4-8e5e-adadb8d580ef", 00:25:52.492 "strip_size_kb": 64, 00:25:52.492 "state": "online", 00:25:52.492 "raid_level": "raid5f", 00:25:52.492 "superblock": false, 00:25:52.492 "num_base_bdevs": 4, 00:25:52.492 "num_base_bdevs_discovered": 4, 00:25:52.492 "num_base_bdevs_operational": 4, 00:25:52.492 "base_bdevs_list": [ 00:25:52.492 { 
00:25:52.492 "name": "spare", 00:25:52.492 "uuid": "f6cb5ebc-c089-5cd9-9f5a-f30e168bc1ba", 00:25:52.492 "is_configured": true, 00:25:52.492 "data_offset": 0, 00:25:52.492 "data_size": 65536 00:25:52.492 }, 00:25:52.492 { 00:25:52.492 "name": "BaseBdev2", 00:25:52.492 "uuid": "cd206dd1-3d0d-4b0f-99dc-1625294ea184", 00:25:52.492 "is_configured": true, 00:25:52.492 "data_offset": 0, 00:25:52.492 "data_size": 65536 00:25:52.492 }, 00:25:52.492 { 00:25:52.492 "name": "BaseBdev3", 00:25:52.492 "uuid": "84b076d7-d1d6-4a8a-973d-40ca5b8b0278", 00:25:52.492 "is_configured": true, 00:25:52.492 "data_offset": 0, 00:25:52.492 "data_size": 65536 00:25:52.492 }, 00:25:52.492 { 00:25:52.492 "name": "BaseBdev4", 00:25:52.492 "uuid": "d28e1646-74d7-46d1-81e9-e74f34bad319", 00:25:52.492 "is_configured": true, 00:25:52.492 "data_offset": 0, 00:25:52.492 "data_size": 65536 00:25:52.492 } 00:25:52.492 ] 00:25:52.492 }' 00:25:52.492 17:04:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:52.750 17:04:41 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:52.750 17:04:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:52.750 17:04:41 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:52.750 17:04:41 -- bdev/bdev_raid.sh@660 -- # break 00:25:52.750 17:04:41 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:52.750 17:04:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:52.750 17:04:41 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:52.750 17:04:41 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:52.750 17:04:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:52.750 17:04:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.750 17:04:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.008 17:04:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:53.008 "name": "raid_bdev1", 00:25:53.008 "uuid": "0e877de2-3c13-4bd4-8e5e-adadb8d580ef", 00:25:53.008 "strip_size_kb": 64, 00:25:53.008 "state": "online", 00:25:53.008 "raid_level": "raid5f", 00:25:53.008 "superblock": false, 00:25:53.008 "num_base_bdevs": 4, 00:25:53.008 "num_base_bdevs_discovered": 4, 00:25:53.008 "num_base_bdevs_operational": 4, 00:25:53.008 "base_bdevs_list": [ 00:25:53.008 { 00:25:53.008 "name": "spare", 00:25:53.008 "uuid": "f6cb5ebc-c089-5cd9-9f5a-f30e168bc1ba", 00:25:53.008 "is_configured": true, 00:25:53.009 "data_offset": 0, 00:25:53.009 "data_size": 65536 00:25:53.009 }, 00:25:53.009 { 00:25:53.009 "name": "BaseBdev2", 00:25:53.009 "uuid": "cd206dd1-3d0d-4b0f-99dc-1625294ea184", 00:25:53.009 "is_configured": true, 00:25:53.009 "data_offset": 0, 00:25:53.009 "data_size": 65536 00:25:53.009 }, 00:25:53.009 { 00:25:53.009 "name": "BaseBdev3", 00:25:53.009 "uuid": "84b076d7-d1d6-4a8a-973d-40ca5b8b0278", 00:25:53.009 "is_configured": true, 00:25:53.009 "data_offset": 0, 00:25:53.009 "data_size": 65536 00:25:53.009 }, 00:25:53.009 { 00:25:53.009 "name": "BaseBdev4", 00:25:53.009 "uuid": "d28e1646-74d7-46d1-81e9-e74f34bad319", 00:25:53.009 "is_configured": true, 00:25:53.009 "data_offset": 0, 00:25:53.009 "data_size": 65536 00:25:53.009 } 00:25:53.009 ] 00:25:53.009 }' 00:25:53.009 17:04:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:53.009 17:04:41 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:53.009 17:04:41 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:25:53.009 17:04:41 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:53.009 17:04:41 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:53.009 17:04:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:53.009 17:04:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:53.009 17:04:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:53.009 17:04:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:53.009 17:04:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:53.009 17:04:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:53.009 17:04:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:53.009 17:04:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:53.009 17:04:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:53.009 17:04:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.009 17:04:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.267 17:04:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:53.267 "name": "raid_bdev1", 00:25:53.267 "uuid": "0e877de2-3c13-4bd4-8e5e-adadb8d580ef", 00:25:53.267 "strip_size_kb": 64, 00:25:53.267 "state": "online", 00:25:53.267 "raid_level": "raid5f", 00:25:53.267 "superblock": false, 00:25:53.267 "num_base_bdevs": 4, 00:25:53.267 "num_base_bdevs_discovered": 4, 00:25:53.267 "num_base_bdevs_operational": 4, 00:25:53.267 "base_bdevs_list": [ 00:25:53.267 { 00:25:53.267 "name": "spare", 00:25:53.267 "uuid": "f6cb5ebc-c089-5cd9-9f5a-f30e168bc1ba", 00:25:53.267 "is_configured": true, 00:25:53.267 "data_offset": 0, 00:25:53.267 "data_size": 65536 00:25:53.267 }, 00:25:53.267 { 00:25:53.267 "name": "BaseBdev2", 00:25:53.267 "uuid": "cd206dd1-3d0d-4b0f-99dc-1625294ea184", 00:25:53.267 "is_configured": true, 00:25:53.267 "data_offset": 0, 00:25:53.267 "data_size": 65536 00:25:53.267 }, 00:25:53.267 { 00:25:53.267 "name": "BaseBdev3", 00:25:53.267 "uuid": "84b076d7-d1d6-4a8a-973d-40ca5b8b0278", 00:25:53.267 "is_configured": true, 00:25:53.267 "data_offset": 0, 00:25:53.267 "data_size": 65536 00:25:53.267 }, 00:25:53.267 { 00:25:53.267 "name": "BaseBdev4", 00:25:53.267 "uuid": "d28e1646-74d7-46d1-81e9-e74f34bad319", 00:25:53.267 "is_configured": true, 00:25:53.267 "data_offset": 0, 00:25:53.267 "data_size": 65536 00:25:53.267 } 00:25:53.267 ] 00:25:53.267 }' 00:25:53.267 17:04:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:53.267 17:04:42 -- common/autotest_common.sh@10 -- # set +x 00:25:53.833 17:04:42 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:54.091 [2024-11-05 17:04:42.867580] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:54.091 [2024-11-05 17:04:42.867736] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:54.091 [2024-11-05 17:04:42.867901] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:54.091 [2024-11-05 17:04:42.868118] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:54.091 [2024-11-05 17:04:42.868232] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:25:54.091 17:04:42 -- bdev/bdev_raid.sh@671 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.091 17:04:42 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:54.349 17:04:43 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:54.349 17:04:43 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:54.349 17:04:43 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:54.349 17:04:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:54.349 17:04:43 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:54.349 17:04:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:54.349 17:04:43 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:54.349 17:04:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:54.349 17:04:43 -- bdev/nbd_common.sh@12 -- # local i 00:25:54.349 17:04:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:54.349 17:04:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:54.349 17:04:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:54.608 /dev/nbd0 00:25:54.608 17:04:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:54.608 17:04:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:54.608 17:04:43 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:54.608 17:04:43 -- common/autotest_common.sh@867 -- # local i 00:25:54.608 17:04:43 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:54.608 17:04:43 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:54.608 17:04:43 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:54.608 17:04:43 -- common/autotest_common.sh@871 -- # break 00:25:54.608 17:04:43 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:54.608 17:04:43 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:54.608 17:04:43 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:54.608 1+0 records in 00:25:54.608 1+0 records out 00:25:54.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527023 s, 7.8 MB/s 00:25:54.608 17:04:43 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:54.608 17:04:43 -- common/autotest_common.sh@884 -- # size=4096 00:25:54.608 17:04:43 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:54.608 17:04:43 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:54.608 17:04:43 -- common/autotest_common.sh@887 -- # return 0 00:25:54.608 17:04:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:54.608 17:04:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:54.608 17:04:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:54.866 /dev/nbd1 00:25:54.866 17:04:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:54.866 17:04:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:54.866 17:04:43 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:54.866 17:04:43 -- common/autotest_common.sh@867 -- # local i 00:25:54.866 17:04:43 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:54.866 17:04:43 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:54.866 17:04:43 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:54.866 17:04:43 -- common/autotest_common.sh@871 -- 
# break 00:25:54.866 17:04:43 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:54.866 17:04:43 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:54.866 17:04:43 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:54.866 1+0 records in 00:25:54.866 1+0 records out 00:25:54.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562131 s, 7.3 MB/s 00:25:54.866 17:04:43 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:54.866 17:04:43 -- common/autotest_common.sh@884 -- # size=4096 00:25:54.866 17:04:43 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:54.866 17:04:43 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:54.866 17:04:43 -- common/autotest_common.sh@887 -- # return 0 00:25:54.866 17:04:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:54.866 17:04:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:54.866 17:04:43 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:55.124 17:04:43 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:55.124 17:04:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:55.124 17:04:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:55.124 17:04:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:55.124 17:04:43 -- bdev/nbd_common.sh@51 -- # local i 00:25:55.124 17:04:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:55.124 17:04:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:55.382 17:04:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:55.382 17:04:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:55.382 17:04:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:55.382 17:04:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:55.382 17:04:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:55.382 17:04:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:55.382 17:04:44 -- bdev/nbd_common.sh@41 -- # break 00:25:55.382 17:04:44 -- bdev/nbd_common.sh@45 -- # return 0 00:25:55.382 17:04:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:55.382 17:04:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:55.639 17:04:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:55.639 17:04:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:55.639 17:04:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:55.639 17:04:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:55.640 17:04:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:55.640 17:04:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:55.640 17:04:44 -- bdev/nbd_common.sh@41 -- # break 00:25:55.640 17:04:44 -- bdev/nbd_common.sh@45 -- # return 0 00:25:55.640 17:04:44 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:25:55.640 17:04:44 -- bdev/bdev_raid.sh@709 -- # killprocess 131056 00:25:55.640 17:04:44 -- common/autotest_common.sh@936 -- # '[' -z 131056 ']' 00:25:55.640 17:04:44 -- common/autotest_common.sh@940 -- # kill -0 131056 00:25:55.640 17:04:44 -- common/autotest_common.sh@941 -- # uname 00:25:55.640 17:04:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:55.640 17:04:44 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131056 00:25:55.640 17:04:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:55.640 17:04:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:55.640 17:04:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 131056' 00:25:55.640 killing process with pid 131056 00:25:55.640 17:04:44 -- common/autotest_common.sh@955 -- # kill 131056 00:25:55.640 Received shutdown signal, test time was about 60.000000 seconds 00:25:55.640 00:25:55.640 Latency(us) 00:25:55.640 [2024-11-05T17:04:44.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.640 [2024-11-05T17:04:44.517Z] =================================================================================================================== 00:25:55.640 [2024-11-05T17:04:44.517Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:55.640 17:04:44 -- common/autotest_common.sh@960 -- # wait 131056 00:25:55.640 [2024-11-05 17:04:44.417209] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:55.898 [2024-11-05 17:04:44.732507] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:56.831 ************************************ 00:25:56.831 END TEST raid5f_rebuild_test 00:25:56.831 ************************************ 00:25:56.831 17:04:45 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:56.831 00:25:56.831 real 0m24.870s 00:25:56.831 user 0m36.408s 00:25:56.831 sys 0m2.524s 00:25:56.831 17:04:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:56.831 17:04:45 -- common/autotest_common.sh@10 -- # set +x 00:25:56.831 17:04:45 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:25:56.831 17:04:45 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:25:56.831 17:04:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:56.831 17:04:45 -- common/autotest_common.sh@10 -- # set +x 00:25:56.831 ************************************ 00:25:56.831 START TEST raid5f_rebuild_test_sb 00:25:56.831 ************************************ 00:25:56.831 17:04:45 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 4 true false 00:25:56.831 17:04:45 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:25:56.831 17:04:45 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:56.832 17:04:45 -- 
bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@544 -- # raid_pid=131666 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:56.832 17:04:45 -- bdev/bdev_raid.sh@545 -- # waitforlisten 131666 /var/tmp/spdk-raid.sock 00:25:56.832 17:04:45 -- common/autotest_common.sh@829 -- # '[' -z 131666 ']' 00:25:56.832 17:04:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:56.832 17:04:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:56.832 17:04:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:56.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:56.832 17:04:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:56.832 17:04:45 -- common/autotest_common.sh@10 -- # set +x 00:25:57.090 [2024-11-05 17:04:45.767097] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:57.090 [2024-11-05 17:04:45.767469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131666 ] 00:25:57.090 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:57.090 Zero copy mechanism will not be used. 
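The trace above shows the harness starting bdevperf as a long-running RPC server rather than a one-shot benchmark: -r points it at the private socket /var/tmp/spdk-raid.sock, -z tells it to hold off on I/O until driven over RPC, and waitforlisten 131666 blocks until the application answers on that socket. A minimal sketch of the same launch-and-wait pattern, using only the paths and flags visible in this run (the bounded polling loop is a simplified stand-in for the suite's waitforlisten helper, not the helper itself):

    #!/usr/bin/env bash
    # Start bdevperf as an RPC server on a private UNIX socket,
    # with the same arguments this run uses.
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-raid.sock
    "$spdk/build/examples/bdevperf" -r "$sock" -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Poll until the target answers RPCs on the socket
    # (simplified stand-in for the suite's waitforlisten).
    for _ in $(seq 1 100); do
        "$spdk/scripts/rpc.py" -s "$sock" bdev_raid_get_bdevs all >/dev/null 2>&1 && break
        sleep 0.1
    done
    echo "bdevperf (pid $raid_pid) is listening on $sock"

Once the socket is live, every rpc.py call that follows in the trace carries the same -s /var/tmp/spdk-raid.sock argument, which is what keeps this test isolated from any other SPDK instance on the host.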
00:25:57.090 [2024-11-05 17:04:45.920533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.348 [2024-11-05 17:04:46.084330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.606 [2024-11-05 17:04:46.248740] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:57.863 17:04:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:57.863 17:04:46 -- common/autotest_common.sh@862 -- # return 0 00:25:57.863 17:04:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:57.863 17:04:46 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:57.863 17:04:46 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:58.121 BaseBdev1_malloc 00:25:58.121 17:04:46 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:58.379 [2024-11-05 17:04:47.147229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:58.379 [2024-11-05 17:04:47.147464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:58.379 [2024-11-05 17:04:47.147618] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:25:58.379 [2024-11-05 17:04:47.147758] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:58.379 [2024-11-05 17:04:47.149943] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:58.379 [2024-11-05 17:04:47.150118] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:58.379 BaseBdev1 00:25:58.379 17:04:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:58.379 17:04:47 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:58.379 17:04:47 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:58.636 BaseBdev2_malloc 00:25:58.636 17:04:47 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:58.895 [2024-11-05 17:04:47.605608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:58.895 [2024-11-05 17:04:47.605840] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:58.895 [2024-11-05 17:04:47.605920] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:58.895 [2024-11-05 17:04:47.606246] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:58.895 [2024-11-05 17:04:47.608301] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:58.895 [2024-11-05 17:04:47.608473] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:58.895 BaseBdev2 00:25:58.895 17:04:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:58.895 17:04:47 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:58.895 17:04:47 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:59.153 BaseBdev3_malloc 00:25:59.153 17:04:47 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:25:59.153 [2024-11-05 17:04:48.046835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:59.153 [2024-11-05 17:04:48.047109] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:59.153 [2024-11-05 17:04:48.047290] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:25:59.153 [2024-11-05 17:04:48.047531] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:59.153 [2024-11-05 17:04:48.049855] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:59.153 [2024-11-05 17:04:48.050117] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:59.410 BaseBdev3 00:25:59.410 17:04:48 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:59.410 17:04:48 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:59.410 17:04:48 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:59.410 BaseBdev4_malloc 00:25:59.410 17:04:48 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:59.669 [2024-11-05 17:04:48.509068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:59.669 [2024-11-05 17:04:48.509264] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:59.669 [2024-11-05 17:04:48.509336] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:25:59.669 [2024-11-05 17:04:48.509664] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:59.669 [2024-11-05 17:04:48.511905] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:59.669 [2024-11-05 17:04:48.512082] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:59.669 BaseBdev4 00:25:59.669 17:04:48 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:59.952 spare_malloc 00:25:59.952 17:04:48 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:00.222 spare_delay 00:26:00.222 17:04:48 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:00.480 [2024-11-05 17:04:49.134368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:00.480 [2024-11-05 17:04:49.134587] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:00.480 [2024-11-05 17:04:49.134655] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:26:00.480 [2024-11-05 17:04:49.134974] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:00.480 [2024-11-05 17:04:49.137175] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:00.480 [2024-11-05 17:04:49.137347] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:00.480 spare 00:26:00.480 17:04:49 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:26:00.480 [2024-11-05 17:04:49.366523] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:00.480 [2024-11-05 17:04:49.368419] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:00.480 [2024-11-05 17:04:49.368614] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:00.480 [2024-11-05 17:04:49.368784] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:00.480 [2024-11-05 17:04:49.369049] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:26:00.480 [2024-11-05 17:04:49.369174] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:00.480 [2024-11-05 17:04:49.369317] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:26:00.480 [2024-11-05 17:04:49.374805] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:26:00.481 [2024-11-05 17:04:49.374997] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:26:00.481 [2024-11-05 17:04:49.375268] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:00.739 17:04:49 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:00.739 17:04:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:00.739 17:04:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:00.739 17:04:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:00.739 17:04:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:00.739 17:04:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:00.739 17:04:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:00.739 17:04:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:00.739 17:04:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:00.739 17:04:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:00.739 17:04:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.739 17:04:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.739 17:04:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:00.739 "name": "raid_bdev1", 00:26:00.739 "uuid": "bbf7feb3-87e3-499c-acca-3ac3683d1931", 00:26:00.739 "strip_size_kb": 64, 00:26:00.739 "state": "online", 00:26:00.739 "raid_level": "raid5f", 00:26:00.739 "superblock": true, 00:26:00.739 "num_base_bdevs": 4, 00:26:00.739 "num_base_bdevs_discovered": 4, 00:26:00.739 "num_base_bdevs_operational": 4, 00:26:00.739 "base_bdevs_list": [ 00:26:00.739 { 00:26:00.739 "name": "BaseBdev1", 00:26:00.739 "uuid": "9c3663a3-4407-56bb-a8a9-3d19d2b0f1bc", 00:26:00.739 "is_configured": true, 00:26:00.739 "data_offset": 2048, 00:26:00.739 "data_size": 63488 00:26:00.739 }, 00:26:00.739 { 00:26:00.739 "name": "BaseBdev2", 00:26:00.739 "uuid": "0a894bf0-221e-5371-98cd-06ad6a51f1d2", 00:26:00.739 "is_configured": true, 00:26:00.739 "data_offset": 2048, 00:26:00.739 "data_size": 63488 00:26:00.739 }, 00:26:00.739 { 00:26:00.739 "name": "BaseBdev3", 00:26:00.739 "uuid": "a04a93f5-2fff-5fa1-9487-4d8d8070bd51", 00:26:00.739 "is_configured": true, 00:26:00.739 "data_offset": 2048, 00:26:00.739 "data_size": 63488 00:26:00.739 
}, 00:26:00.739 { 00:26:00.739 "name": "BaseBdev4", 00:26:00.739 "uuid": "3c74b4b0-3387-5d32-bb61-0e99325dc9f1", 00:26:00.739 "is_configured": true, 00:26:00.739 "data_offset": 2048, 00:26:00.739 "data_size": 63488 00:26:00.739 } 00:26:00.739 ] 00:26:00.739 }' 00:26:00.739 17:04:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:00.739 17:04:49 -- common/autotest_common.sh@10 -- # set +x 00:26:01.304 17:04:50 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:01.563 17:04:50 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:26:01.563 [2024-11-05 17:04:50.369546] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:01.563 17:04:50 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:26:01.563 17:04:50 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.563 17:04:50 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:01.821 17:04:50 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:26:01.821 17:04:50 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:26:01.821 17:04:50 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:26:01.821 17:04:50 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:01.821 17:04:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:01.821 17:04:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:01.821 17:04:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:01.821 17:04:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:01.821 17:04:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:01.821 17:04:50 -- bdev/nbd_common.sh@12 -- # local i 00:26:01.821 17:04:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:01.821 17:04:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:01.821 17:04:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:02.079 [2024-11-05 17:04:50.817492] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:02.079 /dev/nbd0 00:26:02.079 17:04:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:02.079 17:04:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:02.079 17:04:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:02.079 17:04:50 -- common/autotest_common.sh@867 -- # local i 00:26:02.079 17:04:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:02.079 17:04:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:02.079 17:04:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:02.079 17:04:50 -- common/autotest_common.sh@871 -- # break 00:26:02.079 17:04:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:02.079 17:04:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:02.079 17:04:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:02.079 1+0 records in 00:26:02.079 1+0 records out 00:26:02.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411989 s, 9.9 MB/s 00:26:02.079 17:04:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:02.079 17:04:50 -- common/autotest_common.sh@884 -- # size=4096 00:26:02.079 17:04:50 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:02.079 17:04:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:02.079 17:04:50 -- common/autotest_common.sh@887 -- # return 0 00:26:02.079 17:04:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:02.079 17:04:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:02.079 17:04:50 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:26:02.079 17:04:50 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:26:02.079 17:04:50 -- bdev/bdev_raid.sh@582 -- # echo 192 00:26:02.079 17:04:50 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:26:02.645 496+0 records in 00:26:02.645 496+0 records out 00:26:02.645 97517568 bytes (98 MB, 93 MiB) copied, 0.538226 s, 181 MB/s 00:26:02.645 17:04:51 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:02.645 17:04:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:02.645 17:04:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:02.645 17:04:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:02.645 17:04:51 -- bdev/nbd_common.sh@51 -- # local i 00:26:02.645 17:04:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:02.645 17:04:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:02.903 17:04:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:02.903 17:04:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:02.903 17:04:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:02.903 17:04:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:02.903 17:04:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:02.904 17:04:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:02.904 [2024-11-05 17:04:51.689099] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:02.904 17:04:51 -- bdev/nbd_common.sh@41 -- # break 00:26:02.904 17:04:51 -- bdev/nbd_common.sh@45 -- # return 0 00:26:02.904 17:04:51 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:03.162 [2024-11-05 17:04:51.852024] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:03.162 17:04:51 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:03.162 17:04:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:03.162 17:04:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:03.162 17:04:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:03.162 17:04:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:03.162 17:04:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:03.162 17:04:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:03.162 17:04:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:03.162 17:04:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:03.162 17:04:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:03.162 17:04:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.162 17:04:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.421 17:04:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:03.421 "name": "raid_bdev1", 00:26:03.421 "uuid": "bbf7feb3-87e3-499c-acca-3ac3683d1931", 00:26:03.421 
"strip_size_kb": 64, 00:26:03.421 "state": "online", 00:26:03.421 "raid_level": "raid5f", 00:26:03.421 "superblock": true, 00:26:03.421 "num_base_bdevs": 4, 00:26:03.421 "num_base_bdevs_discovered": 3, 00:26:03.421 "num_base_bdevs_operational": 3, 00:26:03.421 "base_bdevs_list": [ 00:26:03.421 { 00:26:03.421 "name": null, 00:26:03.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.421 "is_configured": false, 00:26:03.421 "data_offset": 2048, 00:26:03.421 "data_size": 63488 00:26:03.421 }, 00:26:03.421 { 00:26:03.421 "name": "BaseBdev2", 00:26:03.421 "uuid": "0a894bf0-221e-5371-98cd-06ad6a51f1d2", 00:26:03.421 "is_configured": true, 00:26:03.421 "data_offset": 2048, 00:26:03.421 "data_size": 63488 00:26:03.421 }, 00:26:03.421 { 00:26:03.421 "name": "BaseBdev3", 00:26:03.421 "uuid": "a04a93f5-2fff-5fa1-9487-4d8d8070bd51", 00:26:03.421 "is_configured": true, 00:26:03.421 "data_offset": 2048, 00:26:03.421 "data_size": 63488 00:26:03.421 }, 00:26:03.421 { 00:26:03.421 "name": "BaseBdev4", 00:26:03.421 "uuid": "3c74b4b0-3387-5d32-bb61-0e99325dc9f1", 00:26:03.421 "is_configured": true, 00:26:03.421 "data_offset": 2048, 00:26:03.421 "data_size": 63488 00:26:03.421 } 00:26:03.421 ] 00:26:03.421 }' 00:26:03.421 17:04:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:03.421 17:04:52 -- common/autotest_common.sh@10 -- # set +x 00:26:03.988 17:04:52 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:04.246 [2024-11-05 17:04:52.920201] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:04.246 [2024-11-05 17:04:52.920376] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:04.246 [2024-11-05 17:04:52.930618] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a710 00:26:04.246 [2024-11-05 17:04:52.937524] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:04.246 17:04:52 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:26:05.181 17:04:53 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:05.181 17:04:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:05.181 17:04:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:05.181 17:04:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:05.181 17:04:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:05.181 17:04:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.181 17:04:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:05.439 17:04:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:05.439 "name": "raid_bdev1", 00:26:05.439 "uuid": "bbf7feb3-87e3-499c-acca-3ac3683d1931", 00:26:05.439 "strip_size_kb": 64, 00:26:05.439 "state": "online", 00:26:05.439 "raid_level": "raid5f", 00:26:05.439 "superblock": true, 00:26:05.439 "num_base_bdevs": 4, 00:26:05.439 "num_base_bdevs_discovered": 4, 00:26:05.439 "num_base_bdevs_operational": 4, 00:26:05.439 "process": { 00:26:05.439 "type": "rebuild", 00:26:05.439 "target": "spare", 00:26:05.439 "progress": { 00:26:05.439 "blocks": 21120, 00:26:05.439 "percent": 11 00:26:05.439 } 00:26:05.439 }, 00:26:05.439 "base_bdevs_list": [ 00:26:05.439 { 00:26:05.439 "name": "spare", 00:26:05.439 "uuid": "d42315bb-72e8-5ba6-a01a-a7f86c400ceb", 00:26:05.439 "is_configured": true, 
00:26:05.439 "data_offset": 2048, 00:26:05.439 "data_size": 63488 00:26:05.439 }, 00:26:05.439 { 00:26:05.439 "name": "BaseBdev2", 00:26:05.439 "uuid": "0a894bf0-221e-5371-98cd-06ad6a51f1d2", 00:26:05.439 "is_configured": true, 00:26:05.439 "data_offset": 2048, 00:26:05.439 "data_size": 63488 00:26:05.439 }, 00:26:05.439 { 00:26:05.439 "name": "BaseBdev3", 00:26:05.439 "uuid": "a04a93f5-2fff-5fa1-9487-4d8d8070bd51", 00:26:05.439 "is_configured": true, 00:26:05.439 "data_offset": 2048, 00:26:05.439 "data_size": 63488 00:26:05.439 }, 00:26:05.439 { 00:26:05.439 "name": "BaseBdev4", 00:26:05.439 "uuid": "3c74b4b0-3387-5d32-bb61-0e99325dc9f1", 00:26:05.439 "is_configured": true, 00:26:05.439 "data_offset": 2048, 00:26:05.439 "data_size": 63488 00:26:05.439 } 00:26:05.439 ] 00:26:05.439 }' 00:26:05.439 17:04:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:05.439 17:04:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:05.439 17:04:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:05.439 17:04:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:05.439 17:04:54 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:05.697 [2024-11-05 17:04:54.462900] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:05.698 [2024-11-05 17:04:54.548182] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:05.698 [2024-11-05 17:04:54.548387] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:05.698 17:04:54 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:05.698 17:04:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:05.698 17:04:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:05.698 17:04:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:05.698 17:04:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:05.698 17:04:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:05.698 17:04:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:05.698 17:04:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:05.698 17:04:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:05.698 17:04:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:05.698 17:04:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.698 17:04:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:05.956 17:04:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:05.956 "name": "raid_bdev1", 00:26:05.956 "uuid": "bbf7feb3-87e3-499c-acca-3ac3683d1931", 00:26:05.956 "strip_size_kb": 64, 00:26:05.956 "state": "online", 00:26:05.956 "raid_level": "raid5f", 00:26:05.956 "superblock": true, 00:26:05.956 "num_base_bdevs": 4, 00:26:05.956 "num_base_bdevs_discovered": 3, 00:26:05.956 "num_base_bdevs_operational": 3, 00:26:05.956 "base_bdevs_list": [ 00:26:05.956 { 00:26:05.956 "name": null, 00:26:05.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.956 "is_configured": false, 00:26:05.956 "data_offset": 2048, 00:26:05.956 "data_size": 63488 00:26:05.956 }, 00:26:05.956 { 00:26:05.956 "name": "BaseBdev2", 00:26:05.956 "uuid": "0a894bf0-221e-5371-98cd-06ad6a51f1d2", 00:26:05.956 "is_configured": true, 00:26:05.956 "data_offset": 
2048, 00:26:05.956 "data_size": 63488 00:26:05.956 }, 00:26:05.956 { 00:26:05.956 "name": "BaseBdev3", 00:26:05.956 "uuid": "a04a93f5-2fff-5fa1-9487-4d8d8070bd51", 00:26:05.956 "is_configured": true, 00:26:05.956 "data_offset": 2048, 00:26:05.956 "data_size": 63488 00:26:05.956 }, 00:26:05.956 { 00:26:05.956 "name": "BaseBdev4", 00:26:05.956 "uuid": "3c74b4b0-3387-5d32-bb61-0e99325dc9f1", 00:26:05.956 "is_configured": true, 00:26:05.956 "data_offset": 2048, 00:26:05.957 "data_size": 63488 00:26:05.957 } 00:26:05.957 ] 00:26:05.957 }' 00:26:05.957 17:04:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:05.957 17:04:54 -- common/autotest_common.sh@10 -- # set +x 00:26:06.892 17:04:55 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:06.892 17:04:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:06.892 17:04:55 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:06.892 17:04:55 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:06.892 17:04:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:06.892 17:04:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.892 17:04:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:06.892 17:04:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:06.892 "name": "raid_bdev1", 00:26:06.892 "uuid": "bbf7feb3-87e3-499c-acca-3ac3683d1931", 00:26:06.892 "strip_size_kb": 64, 00:26:06.892 "state": "online", 00:26:06.892 "raid_level": "raid5f", 00:26:06.892 "superblock": true, 00:26:06.892 "num_base_bdevs": 4, 00:26:06.892 "num_base_bdevs_discovered": 3, 00:26:06.892 "num_base_bdevs_operational": 3, 00:26:06.892 "base_bdevs_list": [ 00:26:06.892 { 00:26:06.892 "name": null, 00:26:06.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.892 "is_configured": false, 00:26:06.892 "data_offset": 2048, 00:26:06.892 "data_size": 63488 00:26:06.892 }, 00:26:06.892 { 00:26:06.892 "name": "BaseBdev2", 00:26:06.892 "uuid": "0a894bf0-221e-5371-98cd-06ad6a51f1d2", 00:26:06.892 "is_configured": true, 00:26:06.892 "data_offset": 2048, 00:26:06.892 "data_size": 63488 00:26:06.892 }, 00:26:06.892 { 00:26:06.892 "name": "BaseBdev3", 00:26:06.892 "uuid": "a04a93f5-2fff-5fa1-9487-4d8d8070bd51", 00:26:06.892 "is_configured": true, 00:26:06.892 "data_offset": 2048, 00:26:06.892 "data_size": 63488 00:26:06.892 }, 00:26:06.892 { 00:26:06.892 "name": "BaseBdev4", 00:26:06.892 "uuid": "3c74b4b0-3387-5d32-bb61-0e99325dc9f1", 00:26:06.892 "is_configured": true, 00:26:06.892 "data_offset": 2048, 00:26:06.892 "data_size": 63488 00:26:06.892 } 00:26:06.892 ] 00:26:06.892 }' 00:26:06.892 17:04:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:06.892 17:04:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:06.892 17:04:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:06.892 17:04:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:06.892 17:04:55 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:07.150 [2024-11-05 17:04:56.025944] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:07.150 [2024-11-05 17:04:56.026426] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:07.150 [2024-11-05 17:04:56.036022] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d00002a8b0 00:26:07.150 [2024-11-05 17:04:56.042898] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:07.409 17:04:56 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:26:08.347 17:04:57 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:08.347 17:04:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:08.347 17:04:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:08.347 17:04:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:08.347 17:04:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:08.347 17:04:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.347 17:04:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:08.606 17:04:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:08.606 "name": "raid_bdev1", 00:26:08.606 "uuid": "bbf7feb3-87e3-499c-acca-3ac3683d1931", 00:26:08.606 "strip_size_kb": 64, 00:26:08.606 "state": "online", 00:26:08.606 "raid_level": "raid5f", 00:26:08.606 "superblock": true, 00:26:08.606 "num_base_bdevs": 4, 00:26:08.606 "num_base_bdevs_discovered": 4, 00:26:08.606 "num_base_bdevs_operational": 4, 00:26:08.606 "process": { 00:26:08.606 "type": "rebuild", 00:26:08.606 "target": "spare", 00:26:08.606 "progress": { 00:26:08.606 "blocks": 23040, 00:26:08.606 "percent": 12 00:26:08.606 } 00:26:08.606 }, 00:26:08.606 "base_bdevs_list": [ 00:26:08.606 { 00:26:08.606 "name": "spare", 00:26:08.606 "uuid": "d42315bb-72e8-5ba6-a01a-a7f86c400ceb", 00:26:08.606 "is_configured": true, 00:26:08.606 "data_offset": 2048, 00:26:08.606 "data_size": 63488 00:26:08.606 }, 00:26:08.606 { 00:26:08.606 "name": "BaseBdev2", 00:26:08.606 "uuid": "0a894bf0-221e-5371-98cd-06ad6a51f1d2", 00:26:08.606 "is_configured": true, 00:26:08.606 "data_offset": 2048, 00:26:08.606 "data_size": 63488 00:26:08.606 }, 00:26:08.606 { 00:26:08.606 "name": "BaseBdev3", 00:26:08.606 "uuid": "a04a93f5-2fff-5fa1-9487-4d8d8070bd51", 00:26:08.606 "is_configured": true, 00:26:08.606 "data_offset": 2048, 00:26:08.606 "data_size": 63488 00:26:08.606 }, 00:26:08.606 { 00:26:08.606 "name": "BaseBdev4", 00:26:08.606 "uuid": "3c74b4b0-3387-5d32-bb61-0e99325dc9f1", 00:26:08.606 "is_configured": true, 00:26:08.606 "data_offset": 2048, 00:26:08.606 "data_size": 63488 00:26:08.606 } 00:26:08.606 ] 00:26:08.606 }' 00:26:08.606 17:04:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:08.606 17:04:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:08.606 17:04:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:08.606 17:04:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:08.606 17:04:57 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:26:08.606 17:04:57 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:26:08.606 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:26:08.606 17:04:57 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:26:08.606 17:04:57 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:26:08.606 17:04:57 -- bdev/bdev_raid.sh@657 -- # local timeout=737 00:26:08.606 17:04:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:08.606 17:04:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:08.606 17:04:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
00:26:08.606 17:04:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:08.606 17:04:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:08.606 17:04:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:08.606 17:04:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:08.606 17:04:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.864 17:04:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:08.864 "name": "raid_bdev1", 00:26:08.864 "uuid": "bbf7feb3-87e3-499c-acca-3ac3683d1931", 00:26:08.864 "strip_size_kb": 64, 00:26:08.864 "state": "online", 00:26:08.864 "raid_level": "raid5f", 00:26:08.864 "superblock": true, 00:26:08.864 "num_base_bdevs": 4, 00:26:08.864 "num_base_bdevs_discovered": 4, 00:26:08.864 "num_base_bdevs_operational": 4, 00:26:08.864 "process": { 00:26:08.864 "type": "rebuild", 00:26:08.864 "target": "spare", 00:26:08.864 "progress": { 00:26:08.864 "blocks": 28800, 00:26:08.864 "percent": 15 00:26:08.864 } 00:26:08.864 }, 00:26:08.864 "base_bdevs_list": [ 00:26:08.864 { 00:26:08.864 "name": "spare", 00:26:08.864 "uuid": "d42315bb-72e8-5ba6-a01a-a7f86c400ceb", 00:26:08.864 "is_configured": true, 00:26:08.864 "data_offset": 2048, 00:26:08.864 "data_size": 63488 00:26:08.864 }, 00:26:08.864 { 00:26:08.864 "name": "BaseBdev2", 00:26:08.864 "uuid": "0a894bf0-221e-5371-98cd-06ad6a51f1d2", 00:26:08.864 "is_configured": true, 00:26:08.864 "data_offset": 2048, 00:26:08.864 "data_size": 63488 00:26:08.864 }, 00:26:08.864 { 00:26:08.864 "name": "BaseBdev3", 00:26:08.864 "uuid": "a04a93f5-2fff-5fa1-9487-4d8d8070bd51", 00:26:08.864 "is_configured": true, 00:26:08.864 "data_offset": 2048, 00:26:08.864 "data_size": 63488 00:26:08.864 }, 00:26:08.864 { 00:26:08.864 "name": "BaseBdev4", 00:26:08.864 "uuid": "3c74b4b0-3387-5d32-bb61-0e99325dc9f1", 00:26:08.864 "is_configured": true, 00:26:08.864 "data_offset": 2048, 00:26:08.864 "data_size": 63488 00:26:08.864 } 00:26:08.864 ] 00:26:08.864 }' 00:26:08.864 17:04:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:08.864 17:04:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:08.864 17:04:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:08.864 17:04:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:08.864 17:04:57 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:10.239 17:04:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:10.239 17:04:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:10.239 17:04:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:10.239 17:04:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:10.239 17:04:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:10.239 17:04:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:10.239 17:04:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:10.239 17:04:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:10.239 17:04:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:10.239 "name": "raid_bdev1", 00:26:10.239 "uuid": "bbf7feb3-87e3-499c-acca-3ac3683d1931", 00:26:10.239 "strip_size_kb": 64, 00:26:10.239 "state": "online", 00:26:10.239 "raid_level": "raid5f", 00:26:10.239 "superblock": true, 00:26:10.239 "num_base_bdevs": 4, 00:26:10.239 
"num_base_bdevs_discovered": 4, 00:26:10.239 "num_base_bdevs_operational": 4, 00:26:10.239 "process": { 00:26:10.239 "type": "rebuild", 00:26:10.239 "target": "spare", 00:26:10.239 "progress": { 00:26:10.239 "blocks": 53760, 00:26:10.239 "percent": 28 00:26:10.239 } 00:26:10.239 }, 00:26:10.239 "base_bdevs_list": [ 00:26:10.239 { 00:26:10.239 "name": "spare", 00:26:10.239 "uuid": "d42315bb-72e8-5ba6-a01a-a7f86c400ceb", 00:26:10.239 "is_configured": true, 00:26:10.239 "data_offset": 2048, 00:26:10.239 "data_size": 63488 00:26:10.239 }, 00:26:10.239 { 00:26:10.239 "name": "BaseBdev2", 00:26:10.239 "uuid": "0a894bf0-221e-5371-98cd-06ad6a51f1d2", 00:26:10.239 "is_configured": true, 00:26:10.239 "data_offset": 2048, 00:26:10.239 "data_size": 63488 00:26:10.239 }, 00:26:10.239 { 00:26:10.239 "name": "BaseBdev3", 00:26:10.239 "uuid": "a04a93f5-2fff-5fa1-9487-4d8d8070bd51", 00:26:10.239 "is_configured": true, 00:26:10.239 "data_offset": 2048, 00:26:10.239 "data_size": 63488 00:26:10.239 }, 00:26:10.239 { 00:26:10.239 "name": "BaseBdev4", 00:26:10.239 "uuid": "3c74b4b0-3387-5d32-bb61-0e99325dc9f1", 00:26:10.239 "is_configured": true, 00:26:10.239 "data_offset": 2048, 00:26:10.239 "data_size": 63488 00:26:10.239 } 00:26:10.239 ] 00:26:10.239 }' 00:26:10.239 17:04:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:10.239 17:04:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:10.239 17:04:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:10.239 17:04:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:10.239 17:04:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:11.174 17:05:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:11.174 17:05:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:11.174 17:05:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:11.174 17:05:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:11.174 17:05:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:11.174 17:05:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:11.174 17:05:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.174 17:05:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:11.433 17:05:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:11.433 "name": "raid_bdev1", 00:26:11.433 "uuid": "bbf7feb3-87e3-499c-acca-3ac3683d1931", 00:26:11.433 "strip_size_kb": 64, 00:26:11.433 "state": "online", 00:26:11.433 "raid_level": "raid5f", 00:26:11.433 "superblock": true, 00:26:11.433 "num_base_bdevs": 4, 00:26:11.433 "num_base_bdevs_discovered": 4, 00:26:11.433 "num_base_bdevs_operational": 4, 00:26:11.433 "process": { 00:26:11.433 "type": "rebuild", 00:26:11.433 "target": "spare", 00:26:11.433 "progress": { 00:26:11.433 "blocks": 78720, 00:26:11.433 "percent": 41 00:26:11.433 } 00:26:11.433 }, 00:26:11.433 "base_bdevs_list": [ 00:26:11.433 { 00:26:11.433 "name": "spare", 00:26:11.433 "uuid": "d42315bb-72e8-5ba6-a01a-a7f86c400ceb", 00:26:11.433 "is_configured": true, 00:26:11.433 "data_offset": 2048, 00:26:11.433 "data_size": 63488 00:26:11.433 }, 00:26:11.433 { 00:26:11.433 "name": "BaseBdev2", 00:26:11.433 "uuid": "0a894bf0-221e-5371-98cd-06ad6a51f1d2", 00:26:11.433 "is_configured": true, 00:26:11.433 "data_offset": 2048, 00:26:11.433 "data_size": 63488 00:26:11.433 }, 00:26:11.433 { 00:26:11.433 "name": "BaseBdev3", 00:26:11.433 
"uuid": "a04a93f5-2fff-5fa1-9487-4d8d8070bd51", 00:26:11.433 "is_configured": true, 00:26:11.433 "data_offset": 2048, 00:26:11.433 "data_size": 63488 00:26:11.433 }, 00:26:11.433 { 00:26:11.433 "name": "BaseBdev4", 00:26:11.433 "uuid": "3c74b4b0-3387-5d32-bb61-0e99325dc9f1", 00:26:11.433 "is_configured": true, 00:26:11.433 "data_offset": 2048, 00:26:11.433 "data_size": 63488 00:26:11.433 } 00:26:11.433 ] 00:26:11.433 }' 00:26:11.433 17:05:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:11.433 17:05:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:11.691 17:05:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:11.691 17:05:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:11.691 17:05:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:12.627 17:05:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:12.627 17:05:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:12.627 17:05:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:12.627 17:05:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:12.627 17:05:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:12.627 17:05:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:12.627 17:05:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.627 17:05:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:12.885 17:05:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:12.885 "name": "raid_bdev1", 00:26:12.885 "uuid": "bbf7feb3-87e3-499c-acca-3ac3683d1931", 00:26:12.885 "strip_size_kb": 64, 00:26:12.885 "state": "online", 00:26:12.885 "raid_level": "raid5f", 00:26:12.885 "superblock": true, 00:26:12.885 "num_base_bdevs": 4, 00:26:12.885 "num_base_bdevs_discovered": 4, 00:26:12.885 "num_base_bdevs_operational": 4, 00:26:12.885 "process": { 00:26:12.885 "type": "rebuild", 00:26:12.885 "target": "spare", 00:26:12.885 "progress": { 00:26:12.885 "blocks": 103680, 00:26:12.885 "percent": 54 00:26:12.885 } 00:26:12.885 }, 00:26:12.885 "base_bdevs_list": [ 00:26:12.885 { 00:26:12.885 "name": "spare", 00:26:12.885 "uuid": "d42315bb-72e8-5ba6-a01a-a7f86c400ceb", 00:26:12.885 "is_configured": true, 00:26:12.885 "data_offset": 2048, 00:26:12.885 "data_size": 63488 00:26:12.885 }, 00:26:12.885 { 00:26:12.885 "name": "BaseBdev2", 00:26:12.885 "uuid": "0a894bf0-221e-5371-98cd-06ad6a51f1d2", 00:26:12.885 "is_configured": true, 00:26:12.885 "data_offset": 2048, 00:26:12.885 "data_size": 63488 00:26:12.885 }, 00:26:12.885 { 00:26:12.885 "name": "BaseBdev3", 00:26:12.885 "uuid": "a04a93f5-2fff-5fa1-9487-4d8d8070bd51", 00:26:12.885 "is_configured": true, 00:26:12.885 "data_offset": 2048, 00:26:12.885 "data_size": 63488 00:26:12.885 }, 00:26:12.885 { 00:26:12.885 "name": "BaseBdev4", 00:26:12.885 "uuid": "3c74b4b0-3387-5d32-bb61-0e99325dc9f1", 00:26:12.885 "is_configured": true, 00:26:12.885 "data_offset": 2048, 00:26:12.885 "data_size": 63488 00:26:12.885 } 00:26:12.885 ] 00:26:12.885 }' 00:26:12.885 17:05:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:12.885 17:05:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:12.885 17:05:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:12.885 17:05:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:12.885 17:05:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:13.820 
17:05:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:13.820 17:05:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:13.820 17:05:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:13.820 17:05:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:13.820 17:05:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:13.820 17:05:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:13.820 17:05:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.820 17:05:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.078 17:05:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:14.078 "name": "raid_bdev1", 00:26:14.078 "uuid": "bbf7feb3-87e3-499c-acca-3ac3683d1931", 00:26:14.078 "strip_size_kb": 64, 00:26:14.078 "state": "online", 00:26:14.078 "raid_level": "raid5f", 00:26:14.078 "superblock": true, 00:26:14.078 "num_base_bdevs": 4, 00:26:14.078 "num_base_bdevs_discovered": 4, 00:26:14.078 "num_base_bdevs_operational": 4, 00:26:14.078 "process": { 00:26:14.078 "type": "rebuild", 00:26:14.078 "target": "spare", 00:26:14.078 "progress": { 00:26:14.078 "blocks": 130560, 00:26:14.078 "percent": 68 00:26:14.078 } 00:26:14.078 }, 00:26:14.078 "base_bdevs_list": [ 00:26:14.078 { 00:26:14.078 "name": "spare", 00:26:14.078 "uuid": "d42315bb-72e8-5ba6-a01a-a7f86c400ceb", 00:26:14.078 "is_configured": true, 00:26:14.078 "data_offset": 2048, 00:26:14.078 "data_size": 63488 00:26:14.078 }, 00:26:14.078 { 00:26:14.078 "name": "BaseBdev2", 00:26:14.078 "uuid": "0a894bf0-221e-5371-98cd-06ad6a51f1d2", 00:26:14.078 "is_configured": true, 00:26:14.078 "data_offset": 2048, 00:26:14.078 "data_size": 63488 00:26:14.078 }, 00:26:14.078 { 00:26:14.078 "name": "BaseBdev3", 00:26:14.078 "uuid": "a04a93f5-2fff-5fa1-9487-4d8d8070bd51", 00:26:14.078 "is_configured": true, 00:26:14.078 "data_offset": 2048, 00:26:14.078 "data_size": 63488 00:26:14.078 }, 00:26:14.078 { 00:26:14.078 "name": "BaseBdev4", 00:26:14.078 "uuid": "3c74b4b0-3387-5d32-bb61-0e99325dc9f1", 00:26:14.078 "is_configured": true, 00:26:14.078 "data_offset": 2048, 00:26:14.078 "data_size": 63488 00:26:14.078 } 00:26:14.078 ] 00:26:14.078 }' 00:26:14.078 17:05:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:14.078 17:05:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:14.078 17:05:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:14.336 17:05:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:14.336 17:05:03 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:15.298 17:05:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:15.298 17:05:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:15.298 17:05:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:15.298 17:05:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:15.298 17:05:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:15.298 17:05:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:15.299 17:05:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.299 17:05:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.561 17:05:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:15.561 "name": "raid_bdev1", 
00:26:15.561 "uuid": "bbf7feb3-87e3-499c-acca-3ac3683d1931", 00:26:15.561 "strip_size_kb": 64, 00:26:15.561 "state": "online", 00:26:15.561 "raid_level": "raid5f", 00:26:15.561 "superblock": true, 00:26:15.561 "num_base_bdevs": 4, 00:26:15.561 "num_base_bdevs_discovered": 4, 00:26:15.561 "num_base_bdevs_operational": 4, 00:26:15.561 "process": { 00:26:15.561 "type": "rebuild", 00:26:15.561 "target": "spare", 00:26:15.561 "progress": { 00:26:15.561 "blocks": 155520, 00:26:15.561 "percent": 81 00:26:15.561 } 00:26:15.561 }, 00:26:15.561 "base_bdevs_list": [ 00:26:15.561 { 00:26:15.561 "name": "spare", 00:26:15.561 "uuid": "d42315bb-72e8-5ba6-a01a-a7f86c400ceb", 00:26:15.561 "is_configured": true, 00:26:15.561 "data_offset": 2048, 00:26:15.561 "data_size": 63488 00:26:15.561 }, 00:26:15.561 { 00:26:15.561 "name": "BaseBdev2", 00:26:15.561 "uuid": "0a894bf0-221e-5371-98cd-06ad6a51f1d2", 00:26:15.561 "is_configured": true, 00:26:15.561 "data_offset": 2048, 00:26:15.561 "data_size": 63488 00:26:15.561 }, 00:26:15.561 { 00:26:15.561 "name": "BaseBdev3", 00:26:15.561 "uuid": "a04a93f5-2fff-5fa1-9487-4d8d8070bd51", 00:26:15.561 "is_configured": true, 00:26:15.561 "data_offset": 2048, 00:26:15.561 "data_size": 63488 00:26:15.561 }, 00:26:15.561 { 00:26:15.561 "name": "BaseBdev4", 00:26:15.561 "uuid": "3c74b4b0-3387-5d32-bb61-0e99325dc9f1", 00:26:15.561 "is_configured": true, 00:26:15.561 "data_offset": 2048, 00:26:15.561 "data_size": 63488 00:26:15.561 } 00:26:15.561 ] 00:26:15.561 }' 00:26:15.561 17:05:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:15.561 17:05:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:15.561 17:05:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:15.561 17:05:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:15.561 17:05:04 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:16.496 17:05:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:16.496 17:05:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:16.496 17:05:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:16.496 17:05:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:16.496 17:05:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:16.496 17:05:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:16.496 17:05:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.496 17:05:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.754 17:05:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:16.754 "name": "raid_bdev1", 00:26:16.754 "uuid": "bbf7feb3-87e3-499c-acca-3ac3683d1931", 00:26:16.754 "strip_size_kb": 64, 00:26:16.754 "state": "online", 00:26:16.754 "raid_level": "raid5f", 00:26:16.754 "superblock": true, 00:26:16.754 "num_base_bdevs": 4, 00:26:16.754 "num_base_bdevs_discovered": 4, 00:26:16.754 "num_base_bdevs_operational": 4, 00:26:16.754 "process": { 00:26:16.754 "type": "rebuild", 00:26:16.754 "target": "spare", 00:26:16.754 "progress": { 00:26:16.754 "blocks": 180480, 00:26:16.754 "percent": 94 00:26:16.754 } 00:26:16.754 }, 00:26:16.754 "base_bdevs_list": [ 00:26:16.754 { 00:26:16.754 "name": "spare", 00:26:16.754 "uuid": "d42315bb-72e8-5ba6-a01a-a7f86c400ceb", 00:26:16.754 "is_configured": true, 00:26:16.754 "data_offset": 2048, 00:26:16.754 "data_size": 63488 00:26:16.754 }, 00:26:16.754 { 00:26:16.754 "name": 
"BaseBdev2", 00:26:16.754 "uuid": "0a894bf0-221e-5371-98cd-06ad6a51f1d2", 00:26:16.754 "is_configured": true, 00:26:16.754 "data_offset": 2048, 00:26:16.754 "data_size": 63488 00:26:16.754 }, 00:26:16.754 { 00:26:16.754 "name": "BaseBdev3", 00:26:16.754 "uuid": "a04a93f5-2fff-5fa1-9487-4d8d8070bd51", 00:26:16.754 "is_configured": true, 00:26:16.754 "data_offset": 2048, 00:26:16.754 "data_size": 63488 00:26:16.754 }, 00:26:16.754 { 00:26:16.754 "name": "BaseBdev4", 00:26:16.754 "uuid": "3c74b4b0-3387-5d32-bb61-0e99325dc9f1", 00:26:16.754 "is_configured": true, 00:26:16.754 "data_offset": 2048, 00:26:16.754 "data_size": 63488 00:26:16.755 } 00:26:16.755 ] 00:26:16.755 }' 00:26:16.755 17:05:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:16.755 17:05:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:16.755 17:05:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:17.013 17:05:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:17.013 17:05:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:17.271 [2024-11-05 17:05:06.106508] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:17.271 [2024-11-05 17:05:06.106708] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:17.271 [2024-11-05 17:05:06.107011] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:17.837 17:05:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:17.837 17:05:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:17.837 17:05:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:17.837 17:05:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:17.837 17:05:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:17.837 17:05:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:17.837 17:05:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.837 17:05:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:18.095 17:05:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:18.095 "name": "raid_bdev1", 00:26:18.095 "uuid": "bbf7feb3-87e3-499c-acca-3ac3683d1931", 00:26:18.095 "strip_size_kb": 64, 00:26:18.095 "state": "online", 00:26:18.095 "raid_level": "raid5f", 00:26:18.095 "superblock": true, 00:26:18.095 "num_base_bdevs": 4, 00:26:18.095 "num_base_bdevs_discovered": 4, 00:26:18.095 "num_base_bdevs_operational": 4, 00:26:18.095 "base_bdevs_list": [ 00:26:18.095 { 00:26:18.095 "name": "spare", 00:26:18.095 "uuid": "d42315bb-72e8-5ba6-a01a-a7f86c400ceb", 00:26:18.095 "is_configured": true, 00:26:18.095 "data_offset": 2048, 00:26:18.095 "data_size": 63488 00:26:18.095 }, 00:26:18.095 { 00:26:18.095 "name": "BaseBdev2", 00:26:18.095 "uuid": "0a894bf0-221e-5371-98cd-06ad6a51f1d2", 00:26:18.095 "is_configured": true, 00:26:18.095 "data_offset": 2048, 00:26:18.095 "data_size": 63488 00:26:18.095 }, 00:26:18.095 { 00:26:18.095 "name": "BaseBdev3", 00:26:18.095 "uuid": "a04a93f5-2fff-5fa1-9487-4d8d8070bd51", 00:26:18.095 "is_configured": true, 00:26:18.095 "data_offset": 2048, 00:26:18.095 "data_size": 63488 00:26:18.095 }, 00:26:18.095 { 00:26:18.095 "name": "BaseBdev4", 00:26:18.095 "uuid": "3c74b4b0-3387-5d32-bb61-0e99325dc9f1", 00:26:18.095 "is_configured": true, 00:26:18.095 "data_offset": 2048, 00:26:18.095 "data_size": 63488 00:26:18.095 } 
00:26:18.095 ] 00:26:18.095 }' 00:26:18.095 17:05:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:18.354 17:05:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:18.354 17:05:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:18.354 17:05:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:26:18.354 17:05:07 -- bdev/bdev_raid.sh@660 -- # break 00:26:18.354 17:05:07 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:18.354 17:05:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:18.354 17:05:07 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:18.354 17:05:07 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:18.354 17:05:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:18.354 17:05:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:18.354 17:05:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.612 17:05:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:18.612 "name": "raid_bdev1", 00:26:18.612 "uuid": "bbf7feb3-87e3-499c-acca-3ac3683d1931", 00:26:18.612 "strip_size_kb": 64, 00:26:18.612 "state": "online", 00:26:18.612 "raid_level": "raid5f", 00:26:18.612 "superblock": true, 00:26:18.612 "num_base_bdevs": 4, 00:26:18.612 "num_base_bdevs_discovered": 4, 00:26:18.612 "num_base_bdevs_operational": 4, 00:26:18.612 "base_bdevs_list": [ 00:26:18.612 { 00:26:18.612 "name": "spare", 00:26:18.612 "uuid": "d42315bb-72e8-5ba6-a01a-a7f86c400ceb", 00:26:18.612 "is_configured": true, 00:26:18.612 "data_offset": 2048, 00:26:18.612 "data_size": 63488 00:26:18.612 }, 00:26:18.612 { 00:26:18.612 "name": "BaseBdev2", 00:26:18.612 "uuid": "0a894bf0-221e-5371-98cd-06ad6a51f1d2", 00:26:18.612 "is_configured": true, 00:26:18.612 "data_offset": 2048, 00:26:18.612 "data_size": 63488 00:26:18.612 }, 00:26:18.612 { 00:26:18.612 "name": "BaseBdev3", 00:26:18.612 "uuid": "a04a93f5-2fff-5fa1-9487-4d8d8070bd51", 00:26:18.612 "is_configured": true, 00:26:18.612 "data_offset": 2048, 00:26:18.612 "data_size": 63488 00:26:18.612 }, 00:26:18.612 { 00:26:18.612 "name": "BaseBdev4", 00:26:18.612 "uuid": "3c74b4b0-3387-5d32-bb61-0e99325dc9f1", 00:26:18.612 "is_configured": true, 00:26:18.612 "data_offset": 2048, 00:26:18.612 "data_size": 63488 00:26:18.612 } 00:26:18.612 ] 00:26:18.612 }' 00:26:18.612 17:05:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:18.612 17:05:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:18.612 17:05:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:18.612 17:05:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:18.612 17:05:07 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:18.612 17:05:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:18.612 17:05:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:18.612 17:05:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:18.612 17:05:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:18.612 17:05:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:18.612 17:05:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:18.612 17:05:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:18.612 17:05:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:18.612 17:05:07 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:26:18.612 17:05:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.612 17:05:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:18.870 17:05:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:18.870 "name": "raid_bdev1", 00:26:18.870 "uuid": "bbf7feb3-87e3-499c-acca-3ac3683d1931", 00:26:18.870 "strip_size_kb": 64, 00:26:18.870 "state": "online", 00:26:18.870 "raid_level": "raid5f", 00:26:18.870 "superblock": true, 00:26:18.870 "num_base_bdevs": 4, 00:26:18.870 "num_base_bdevs_discovered": 4, 00:26:18.870 "num_base_bdevs_operational": 4, 00:26:18.870 "base_bdevs_list": [ 00:26:18.870 { 00:26:18.870 "name": "spare", 00:26:18.870 "uuid": "d42315bb-72e8-5ba6-a01a-a7f86c400ceb", 00:26:18.870 "is_configured": true, 00:26:18.870 "data_offset": 2048, 00:26:18.870 "data_size": 63488 00:26:18.870 }, 00:26:18.870 { 00:26:18.870 "name": "BaseBdev2", 00:26:18.870 "uuid": "0a894bf0-221e-5371-98cd-06ad6a51f1d2", 00:26:18.870 "is_configured": true, 00:26:18.870 "data_offset": 2048, 00:26:18.870 "data_size": 63488 00:26:18.870 }, 00:26:18.870 { 00:26:18.870 "name": "BaseBdev3", 00:26:18.870 "uuid": "a04a93f5-2fff-5fa1-9487-4d8d8070bd51", 00:26:18.870 "is_configured": true, 00:26:18.870 "data_offset": 2048, 00:26:18.870 "data_size": 63488 00:26:18.870 }, 00:26:18.870 { 00:26:18.870 "name": "BaseBdev4", 00:26:18.870 "uuid": "3c74b4b0-3387-5d32-bb61-0e99325dc9f1", 00:26:18.870 "is_configured": true, 00:26:18.870 "data_offset": 2048, 00:26:18.870 "data_size": 63488 00:26:18.870 } 00:26:18.870 ] 00:26:18.870 }' 00:26:18.870 17:05:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:18.870 17:05:07 -- common/autotest_common.sh@10 -- # set +x 00:26:19.438 17:05:08 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:19.696 [2024-11-05 17:05:08.410901] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:19.696 [2024-11-05 17:05:08.411103] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:19.696 [2024-11-05 17:05:08.411341] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:19.696 [2024-11-05 17:05:08.411566] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:19.696 [2024-11-05 17:05:08.411687] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:26:19.696 17:05:08 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.696 17:05:08 -- bdev/bdev_raid.sh@671 -- # jq length 00:26:19.955 17:05:08 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:26:19.955 17:05:08 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:26:19.955 17:05:08 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:19.955 17:05:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:19.955 17:05:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:19.955 17:05:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:19.955 17:05:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:19.955 17:05:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:19.955 17:05:08 -- 
bdev/nbd_common.sh@12 -- # local i 00:26:19.955 17:05:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:19.955 17:05:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:19.955 17:05:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:20.214 /dev/nbd0 00:26:20.214 17:05:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:20.214 17:05:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:20.214 17:05:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:20.214 17:05:08 -- common/autotest_common.sh@867 -- # local i 00:26:20.214 17:05:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:20.214 17:05:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:20.214 17:05:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:20.214 17:05:08 -- common/autotest_common.sh@871 -- # break 00:26:20.214 17:05:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:20.214 17:05:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:20.214 17:05:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:20.214 1+0 records in 00:26:20.214 1+0 records out 00:26:20.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607183 s, 6.7 MB/s 00:26:20.214 17:05:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:20.214 17:05:08 -- common/autotest_common.sh@884 -- # size=4096 00:26:20.214 17:05:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:20.214 17:05:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:20.214 17:05:08 -- common/autotest_common.sh@887 -- # return 0 00:26:20.214 17:05:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:20.214 17:05:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:20.214 17:05:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:20.473 /dev/nbd1 00:26:20.473 17:05:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:20.473 17:05:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:20.473 17:05:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:26:20.473 17:05:09 -- common/autotest_common.sh@867 -- # local i 00:26:20.473 17:05:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:20.473 17:05:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:20.473 17:05:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:26:20.473 17:05:09 -- common/autotest_common.sh@871 -- # break 00:26:20.473 17:05:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:20.473 17:05:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:20.473 17:05:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:20.473 1+0 records in 00:26:20.473 1+0 records out 00:26:20.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030942 s, 13.2 MB/s 00:26:20.473 17:05:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:20.473 17:05:09 -- common/autotest_common.sh@884 -- # size=4096 00:26:20.473 17:05:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:20.473 17:05:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:20.473 17:05:09 -- 
common/autotest_common.sh@887 -- # return 0 00:26:20.473 17:05:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:20.473 17:05:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:20.473 17:05:09 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:20.732 17:05:09 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:20.732 17:05:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:20.732 17:05:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:20.732 17:05:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:20.732 17:05:09 -- bdev/nbd_common.sh@51 -- # local i 00:26:20.732 17:05:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:20.732 17:05:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:20.732 17:05:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:20.732 17:05:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:20.732 17:05:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:20.732 17:05:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:20.732 17:05:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:20.732 17:05:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:20.732 17:05:09 -- bdev/nbd_common.sh@41 -- # break 00:26:20.732 17:05:09 -- bdev/nbd_common.sh@45 -- # return 0 00:26:20.732 17:05:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:20.732 17:05:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:20.991 17:05:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:20.991 17:05:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:20.991 17:05:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:20.991 17:05:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:20.991 17:05:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:20.991 17:05:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:20.991 17:05:09 -- bdev/nbd_common.sh@41 -- # break 00:26:20.991 17:05:09 -- bdev/nbd_common.sh@45 -- # return 0 00:26:20.991 17:05:09 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:26:20.991 17:05:09 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:20.991 17:05:09 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:26:20.991 17:05:09 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:21.249 17:05:10 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:21.507 [2024-11-05 17:05:10.347036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:21.507 [2024-11-05 17:05:10.347283] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:21.507 [2024-11-05 17:05:10.347363] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:26:21.507 [2024-11-05 17:05:10.347609] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:21.507 [2024-11-05 17:05:10.349823] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:21.507 [2024-11-05 17:05:10.350018] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:21.507 [2024-11-05 
17:05:10.350225] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:21.507 [2024-11-05 17:05:10.350428] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:21.507 BaseBdev1 00:26:21.507 17:05:10 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:21.507 17:05:10 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:26:21.507 17:05:10 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:26:21.765 17:05:10 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:22.023 [2024-11-05 17:05:10.795122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:22.023 [2024-11-05 17:05:10.795334] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:22.023 [2024-11-05 17:05:10.795406] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:26:22.023 [2024-11-05 17:05:10.795652] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:22.023 [2024-11-05 17:05:10.796131] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:22.023 [2024-11-05 17:05:10.796311] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:22.023 [2024-11-05 17:05:10.796504] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:26:22.023 [2024-11-05 17:05:10.796620] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:26:22.023 [2024-11-05 17:05:10.796719] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:22.023 [2024-11-05 17:05:10.796777] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:26:22.023 [2024-11-05 17:05:10.797035] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:22.023 BaseBdev2 00:26:22.023 17:05:10 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:22.023 17:05:10 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:26:22.023 17:05:10 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:26:22.280 17:05:11 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:22.539 [2024-11-05 17:05:11.211207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:22.539 [2024-11-05 17:05:11.211399] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:22.539 [2024-11-05 17:05:11.211466] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:26:22.539 [2024-11-05 17:05:11.211715] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:22.539 [2024-11-05 17:05:11.212163] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:22.539 [2024-11-05 17:05:11.212363] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:22.539 [2024-11-05 17:05:11.212564] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: 
raid superblock found on bdev BaseBdev3 00:26:22.539 [2024-11-05 17:05:11.212690] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:22.539 BaseBdev3 00:26:22.539 17:05:11 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:22.539 17:05:11 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:26:22.539 17:05:11 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:26:22.539 17:05:11 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:22.797 [2024-11-05 17:05:11.643314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:22.797 [2024-11-05 17:05:11.643510] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:22.797 [2024-11-05 17:05:11.643578] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:26:22.797 [2024-11-05 17:05:11.643830] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:22.797 [2024-11-05 17:05:11.644277] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:22.797 [2024-11-05 17:05:11.644469] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:22.797 [2024-11-05 17:05:11.644675] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:26:22.797 [2024-11-05 17:05:11.644802] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:22.797 BaseBdev4 00:26:22.797 17:05:11 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:23.055 17:05:11 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:23.314 [2024-11-05 17:05:12.055449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:23.314 [2024-11-05 17:05:12.055631] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:23.314 [2024-11-05 17:05:12.055701] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:26:23.314 [2024-11-05 17:05:12.055978] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:23.314 [2024-11-05 17:05:12.056554] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:23.314 [2024-11-05 17:05:12.056765] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:23.314 [2024-11-05 17:05:12.056965] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:26:23.314 [2024-11-05 17:05:12.057111] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:23.314 spare 00:26:23.314 17:05:12 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:23.314 17:05:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:23.314 17:05:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:23.314 17:05:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:23.314 17:05:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:23.314 17:05:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:23.314 17:05:12 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:23.314 17:05:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:23.314 17:05:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:23.314 17:05:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:23.314 17:05:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.314 17:05:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:23.314 [2024-11-05 17:05:12.157325] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:26:23.314 [2024-11-05 17:05:12.157469] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:23.314 [2024-11-05 17:05:12.157620] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049510 00:26:23.314 [2024-11-05 17:05:12.162760] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:26:23.314 [2024-11-05 17:05:12.162898] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:26:23.314 [2024-11-05 17:05:12.163144] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:23.572 17:05:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:23.572 "name": "raid_bdev1", 00:26:23.572 "uuid": "bbf7feb3-87e3-499c-acca-3ac3683d1931", 00:26:23.572 "strip_size_kb": 64, 00:26:23.572 "state": "online", 00:26:23.572 "raid_level": "raid5f", 00:26:23.572 "superblock": true, 00:26:23.572 "num_base_bdevs": 4, 00:26:23.572 "num_base_bdevs_discovered": 4, 00:26:23.572 "num_base_bdevs_operational": 4, 00:26:23.572 "base_bdevs_list": [ 00:26:23.572 { 00:26:23.572 "name": "spare", 00:26:23.572 "uuid": "d42315bb-72e8-5ba6-a01a-a7f86c400ceb", 00:26:23.572 "is_configured": true, 00:26:23.572 "data_offset": 2048, 00:26:23.572 "data_size": 63488 00:26:23.572 }, 00:26:23.572 { 00:26:23.572 "name": "BaseBdev2", 00:26:23.572 "uuid": "0a894bf0-221e-5371-98cd-06ad6a51f1d2", 00:26:23.572 "is_configured": true, 00:26:23.572 "data_offset": 2048, 00:26:23.572 "data_size": 63488 00:26:23.572 }, 00:26:23.572 { 00:26:23.572 "name": "BaseBdev3", 00:26:23.572 "uuid": "a04a93f5-2fff-5fa1-9487-4d8d8070bd51", 00:26:23.572 "is_configured": true, 00:26:23.572 "data_offset": 2048, 00:26:23.572 "data_size": 63488 00:26:23.572 }, 00:26:23.572 { 00:26:23.572 "name": "BaseBdev4", 00:26:23.572 "uuid": "3c74b4b0-3387-5d32-bb61-0e99325dc9f1", 00:26:23.572 "is_configured": true, 00:26:23.572 "data_offset": 2048, 00:26:23.572 "data_size": 63488 00:26:23.572 } 00:26:23.572 ] 00:26:23.572 }' 00:26:23.572 17:05:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:23.572 17:05:12 -- common/autotest_common.sh@10 -- # set +x 00:26:24.139 17:05:12 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:24.139 17:05:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:24.139 17:05:12 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:24.139 17:05:12 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:24.139 17:05:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:24.139 17:05:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.139 17:05:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:24.139 17:05:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:24.139 "name": 
"raid_bdev1", 00:26:24.139 "uuid": "bbf7feb3-87e3-499c-acca-3ac3683d1931", 00:26:24.139 "strip_size_kb": 64, 00:26:24.139 "state": "online", 00:26:24.139 "raid_level": "raid5f", 00:26:24.139 "superblock": true, 00:26:24.139 "num_base_bdevs": 4, 00:26:24.139 "num_base_bdevs_discovered": 4, 00:26:24.139 "num_base_bdevs_operational": 4, 00:26:24.139 "base_bdevs_list": [ 00:26:24.139 { 00:26:24.139 "name": "spare", 00:26:24.139 "uuid": "d42315bb-72e8-5ba6-a01a-a7f86c400ceb", 00:26:24.139 "is_configured": true, 00:26:24.139 "data_offset": 2048, 00:26:24.139 "data_size": 63488 00:26:24.139 }, 00:26:24.139 { 00:26:24.139 "name": "BaseBdev2", 00:26:24.139 "uuid": "0a894bf0-221e-5371-98cd-06ad6a51f1d2", 00:26:24.139 "is_configured": true, 00:26:24.139 "data_offset": 2048, 00:26:24.139 "data_size": 63488 00:26:24.139 }, 00:26:24.139 { 00:26:24.139 "name": "BaseBdev3", 00:26:24.139 "uuid": "a04a93f5-2fff-5fa1-9487-4d8d8070bd51", 00:26:24.139 "is_configured": true, 00:26:24.139 "data_offset": 2048, 00:26:24.139 "data_size": 63488 00:26:24.139 }, 00:26:24.139 { 00:26:24.139 "name": "BaseBdev4", 00:26:24.139 "uuid": "3c74b4b0-3387-5d32-bb61-0e99325dc9f1", 00:26:24.139 "is_configured": true, 00:26:24.139 "data_offset": 2048, 00:26:24.139 "data_size": 63488 00:26:24.139 } 00:26:24.139 ] 00:26:24.139 }' 00:26:24.139 17:05:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:24.397 17:05:13 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:24.397 17:05:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:24.397 17:05:13 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:24.397 17:05:13 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.397 17:05:13 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:24.655 17:05:13 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:26:24.656 17:05:13 -- bdev/bdev_raid.sh@709 -- # killprocess 131666 00:26:24.656 17:05:13 -- common/autotest_common.sh@936 -- # '[' -z 131666 ']' 00:26:24.656 17:05:13 -- common/autotest_common.sh@940 -- # kill -0 131666 00:26:24.656 17:05:13 -- common/autotest_common.sh@941 -- # uname 00:26:24.656 17:05:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:24.656 17:05:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131666 00:26:24.656 17:05:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:24.656 17:05:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:24.656 17:05:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 131666' 00:26:24.656 killing process with pid 131666 00:26:24.656 17:05:13 -- common/autotest_common.sh@955 -- # kill 131666 00:26:24.656 Received shutdown signal, test time was about 60.000000 seconds 00:26:24.656 00:26:24.656 Latency(us) 00:26:24.656 [2024-11-05T17:05:13.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.656 [2024-11-05T17:05:13.533Z] =================================================================================================================== 00:26:24.656 [2024-11-05T17:05:13.533Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:24.656 17:05:13 -- common/autotest_common.sh@960 -- # wait 131666 00:26:24.656 [2024-11-05 17:05:13.399378] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:24.656 [2024-11-05 17:05:13.399532] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:26:24.656 [2024-11-05 17:05:13.399643] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:24.656 [2024-11-05 17:05:13.399808] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:26:24.914 [2024-11-05 17:05:13.721952] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:25.848 ************************************ 00:26:25.848 END TEST raid5f_rebuild_test_sb 00:26:25.848 ************************************ 00:26:25.848 17:05:14 -- bdev/bdev_raid.sh@711 -- # return 0 00:26:25.848 00:26:25.848 real 0m28.928s 00:26:25.848 user 0m43.944s 00:26:25.848 sys 0m3.163s 00:26:25.848 17:05:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:25.848 17:05:14 -- common/autotest_common.sh@10 -- # set +x 00:26:25.848 17:05:14 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:26:25.848 ************************************ 00:26:25.848 END TEST bdev_raid 00:26:25.848 ************************************ 00:26:25.848 00:26:25.848 real 12m4.358s 00:26:25.848 user 20m1.681s 00:26:25.848 sys 1m29.070s 00:26:25.848 17:05:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:25.848 17:05:14 -- common/autotest_common.sh@10 -- # set +x 00:26:25.848 17:05:14 -- spdk/autotest.sh@184 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:26:25.848 17:05:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:25.848 17:05:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:25.848 17:05:14 -- common/autotest_common.sh@10 -- # set +x 00:26:26.107 ************************************ 00:26:26.107 START TEST bdevperf_config 00:26:26.107 ************************************ 00:26:26.107 17:05:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:26:26.107 * Looking for test storage... 00:26:26.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:26:26.107 17:05:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:26.107 17:05:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:26.107 17:05:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:26.107 17:05:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:26.107 17:05:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:26.107 17:05:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:26.107 17:05:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:26.107 17:05:14 -- scripts/common.sh@335 -- # IFS=.-: 00:26:26.107 17:05:14 -- scripts/common.sh@335 -- # read -ra ver1 00:26:26.107 17:05:14 -- scripts/common.sh@336 -- # IFS=.-: 00:26:26.107 17:05:14 -- scripts/common.sh@336 -- # read -ra ver2 00:26:26.107 17:05:14 -- scripts/common.sh@337 -- # local 'op=<' 00:26:26.107 17:05:14 -- scripts/common.sh@339 -- # ver1_l=2 00:26:26.107 17:05:14 -- scripts/common.sh@340 -- # ver2_l=1 00:26:26.107 17:05:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:26.107 17:05:14 -- scripts/common.sh@343 -- # case "$op" in 00:26:26.107 17:05:14 -- scripts/common.sh@344 -- # : 1 00:26:26.107 17:05:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:26.107 17:05:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:26.107 17:05:14 -- scripts/common.sh@364 -- # decimal 1 00:26:26.107 17:05:14 -- scripts/common.sh@352 -- # local d=1 00:26:26.107 17:05:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:26.107 17:05:14 -- scripts/common.sh@354 -- # echo 1 00:26:26.107 17:05:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:26.107 17:05:14 -- scripts/common.sh@365 -- # decimal 2 00:26:26.107 17:05:14 -- scripts/common.sh@352 -- # local d=2 00:26:26.107 17:05:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:26.107 17:05:14 -- scripts/common.sh@354 -- # echo 2 00:26:26.107 17:05:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:26.107 17:05:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:26.107 17:05:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:26.107 17:05:14 -- scripts/common.sh@367 -- # return 0 00:26:26.107 17:05:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:26.107 17:05:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:26.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.107 --rc genhtml_branch_coverage=1 00:26:26.107 --rc genhtml_function_coverage=1 00:26:26.107 --rc genhtml_legend=1 00:26:26.107 --rc geninfo_all_blocks=1 00:26:26.107 --rc geninfo_unexecuted_blocks=1 00:26:26.107 00:26:26.107 ' 00:26:26.107 17:05:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:26.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.107 --rc genhtml_branch_coverage=1 00:26:26.107 --rc genhtml_function_coverage=1 00:26:26.107 --rc genhtml_legend=1 00:26:26.107 --rc geninfo_all_blocks=1 00:26:26.107 --rc geninfo_unexecuted_blocks=1 00:26:26.107 00:26:26.107 ' 00:26:26.107 17:05:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:26.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.107 --rc genhtml_branch_coverage=1 00:26:26.107 --rc genhtml_function_coverage=1 00:26:26.107 --rc genhtml_legend=1 00:26:26.107 --rc geninfo_all_blocks=1 00:26:26.107 --rc geninfo_unexecuted_blocks=1 00:26:26.107 00:26:26.107 ' 00:26:26.107 17:05:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:26.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.107 --rc genhtml_branch_coverage=1 00:26:26.107 --rc genhtml_function_coverage=1 00:26:26.107 --rc genhtml_legend=1 00:26:26.107 --rc geninfo_all_blocks=1 00:26:26.107 --rc geninfo_unexecuted_blocks=1 00:26:26.107 00:26:26.107 ' 00:26:26.107 17:05:14 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:26:26.107 17:05:14 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:26:26.107 17:05:14 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:26:26.107 17:05:14 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:26.107 17:05:14 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:26.107 17:05:14 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:26:26.107 17:05:14 -- bdevperf/common.sh@8 -- # local job_section=global 00:26:26.107 17:05:14 -- bdevperf/common.sh@9 -- # local rw=read 00:26:26.107 17:05:14 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:26.107 17:05:14 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:26:26.107 17:05:14 -- bdevperf/common.sh@13 
-- # cat 00:26:26.107 17:05:14 -- bdevperf/common.sh@18 -- # job='[global]' 00:26:26.107 17:05:14 -- bdevperf/common.sh@19 -- # echo 00:26:26.107 00:26:26.107 17:05:14 -- bdevperf/common.sh@20 -- # cat 00:26:26.107 17:05:14 -- bdevperf/test_config.sh@18 -- # create_job job0 00:26:26.107 17:05:14 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:26.107 17:05:14 -- bdevperf/common.sh@9 -- # local rw= 00:26:26.107 17:05:14 -- bdevperf/common.sh@10 -- # local filename= 00:26:26.107 17:05:14 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:26.107 17:05:14 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:26.107 17:05:14 -- bdevperf/common.sh@19 -- # echo 00:26:26.107 00:26:26.107 17:05:14 -- bdevperf/common.sh@20 -- # cat 00:26:26.107 17:05:14 -- bdevperf/test_config.sh@19 -- # create_job job1 00:26:26.107 17:05:14 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:26.107 17:05:14 -- bdevperf/common.sh@9 -- # local rw= 00:26:26.107 17:05:14 -- bdevperf/common.sh@10 -- # local filename= 00:26:26.107 17:05:14 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:26.107 17:05:14 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:26.107 17:05:14 -- bdevperf/common.sh@19 -- # echo 00:26:26.107 00:26:26.107 17:05:14 -- bdevperf/common.sh@20 -- # cat 00:26:26.107 17:05:14 -- bdevperf/test_config.sh@20 -- # create_job job2 00:26:26.107 17:05:14 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:26.107 17:05:14 -- bdevperf/common.sh@9 -- # local rw= 00:26:26.107 17:05:14 -- bdevperf/common.sh@10 -- # local filename= 00:26:26.107 17:05:14 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:26.107 17:05:14 -- bdevperf/common.sh@18 -- # job='[job2]' 00:26:26.107 17:05:14 -- bdevperf/common.sh@19 -- # echo 00:26:26.107 00:26:26.107 17:05:14 -- bdevperf/common.sh@20 -- # cat 00:26:26.107 17:05:14 -- bdevperf/test_config.sh@21 -- # create_job job3 00:26:26.107 17:05:14 -- bdevperf/common.sh@8 -- # local job_section=job3 00:26:26.107 17:05:14 -- bdevperf/common.sh@9 -- # local rw= 00:26:26.107 17:05:14 -- bdevperf/common.sh@10 -- # local filename= 00:26:26.107 17:05:14 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:26:26.107 17:05:14 -- bdevperf/common.sh@18 -- # job='[job3]' 00:26:26.107 17:05:14 -- bdevperf/common.sh@19 -- # echo 00:26:26.107 00:26:26.108 17:05:14 -- bdevperf/common.sh@20 -- # cat 00:26:26.108 17:05:14 -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:30.291 17:05:18 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-11-05 17:05:15.019869] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:30.291 [2024-11-05 17:05:15.020069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132442 ] 00:26:30.291 Using job config with 4 jobs 00:26:30.291 [2024-11-05 17:05:15.188768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.291 [2024-11-05 17:05:15.369376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.291 cpumask for '\''job0'\'' is too big 00:26:30.291 cpumask for '\''job1'\'' is too big 00:26:30.291 cpumask for '\''job2'\'' is too big 00:26:30.291 cpumask for '\''job3'\'' is too big 00:26:30.291 Running I/O for 2 seconds... 00:26:30.291 00:26:30.291 Latency(us) 00:26:30.291 [2024-11-05T17:05:19.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.291 [2024-11-05T17:05:19.168Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:30.291 Malloc0 : 2.01 32974.71 32.20 0.00 0.00 7759.18 1414.98 11915.64 00:26:30.291 [2024-11-05T17:05:19.168Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:30.291 Malloc0 : 2.01 32952.98 32.18 0.00 0.00 7751.01 1340.51 10545.34 00:26:30.291 [2024-11-05T17:05:19.168Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:30.291 Malloc0 : 2.02 32994.84 32.22 0.00 0.00 7728.52 1385.19 9949.56 00:26:30.291 [2024-11-05T17:05:19.168Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:30.291 Malloc0 : 2.02 32973.46 32.20 0.00 0.00 7721.04 1392.64 9532.51 00:26:30.291 [2024-11-05T17:05:19.168Z] =================================================================================================================== 00:26:30.291 [2024-11-05T17:05:19.168Z] Total : 131896.00 128.80 0.00 0.00 7739.91 1340.51 11915.64' 00:26:30.291 17:05:18 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-11-05 17:05:15.019869] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:30.291 [2024-11-05 17:05:15.020069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132442 ] 00:26:30.291 Using job config with 4 jobs 00:26:30.291 [2024-11-05 17:05:15.188768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.291 [2024-11-05 17:05:15.369376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.291 cpumask for '\''job0'\'' is too big 00:26:30.291 cpumask for '\''job1'\'' is too big 00:26:30.291 cpumask for '\''job2'\'' is too big 00:26:30.291 cpumask for '\''job3'\'' is too big 00:26:30.291 Running I/O for 2 seconds... 
00:26:30.291 00:26:30.291 Latency(us) 00:26:30.291 [2024-11-05T17:05:19.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.291 [2024-11-05T17:05:19.168Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:30.291 Malloc0 : 2.01 32974.71 32.20 0.00 0.00 7759.18 1414.98 11915.64 00:26:30.291 [2024-11-05T17:05:19.168Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:30.291 Malloc0 : 2.01 32952.98 32.18 0.00 0.00 7751.01 1340.51 10545.34 00:26:30.291 [2024-11-05T17:05:19.168Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:30.291 Malloc0 : 2.02 32994.84 32.22 0.00 0.00 7728.52 1385.19 9949.56 00:26:30.291 [2024-11-05T17:05:19.168Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:30.291 Malloc0 : 2.02 32973.46 32.20 0.00 0.00 7721.04 1392.64 9532.51 00:26:30.291 [2024-11-05T17:05:19.168Z] =================================================================================================================== 00:26:30.291 [2024-11-05T17:05:19.168Z] Total : 131896.00 128.80 0.00 0.00 7739.91 1340.51 11915.64' 00:26:30.291 17:05:18 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:30.291 17:05:18 -- bdevperf/common.sh@32 -- # echo '[2024-11-05 17:05:15.019869] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:30.291 [2024-11-05 17:05:15.020069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132442 ] 00:26:30.291 Using job config with 4 jobs 00:26:30.291 [2024-11-05 17:05:15.188768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.291 [2024-11-05 17:05:15.369376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.291 cpumask for '\''job0'\'' is too big 00:26:30.291 cpumask for '\''job1'\'' is too big 00:26:30.291 cpumask for '\''job2'\'' is too big 00:26:30.291 cpumask for '\''job3'\'' is too big 00:26:30.291 Running I/O for 2 seconds... 
00:26:30.291 00:26:30.291 Latency(us) 00:26:30.291 [2024-11-05T17:05:19.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.291 [2024-11-05T17:05:19.168Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:30.291 Malloc0 : 2.01 32974.71 32.20 0.00 0.00 7759.18 1414.98 11915.64 00:26:30.291 [2024-11-05T17:05:19.168Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:30.291 Malloc0 : 2.01 32952.98 32.18 0.00 0.00 7751.01 1340.51 10545.34 00:26:30.291 [2024-11-05T17:05:19.168Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:30.291 Malloc0 : 2.02 32994.84 32.22 0.00 0.00 7728.52 1385.19 9949.56 00:26:30.291 [2024-11-05T17:05:19.168Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:30.291 Malloc0 : 2.02 32973.46 32.20 0.00 0.00 7721.04 1392.64 9532.51 00:26:30.291 [2024-11-05T17:05:19.168Z] =================================================================================================================== 00:26:30.291 [2024-11-05T17:05:19.168Z] Total : 131896.00 128.80 0.00 0.00 7739.91 1340.51 11915.64' 00:26:30.292 17:05:18 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:30.292 17:05:18 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:26:30.292 17:05:18 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:30.292 [2024-11-05 17:05:18.995052] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:30.292 [2024-11-05 17:05:18.995504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132496 ] 00:26:30.292 [2024-11-05 17:05:19.169900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.549 [2024-11-05 17:05:19.381463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.115 cpumask for 'job0' is too big 00:26:31.115 cpumask for 'job1' is too big 00:26:31.115 cpumask for 'job2' is too big 00:26:31.115 cpumask for 'job3' is too big 00:26:34.425 17:05:22 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:26:34.425 Running I/O for 2 seconds... 
00:26:34.425 00:26:34.425 Latency(us) 00:26:34.425 [2024-11-05T17:05:23.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.425 [2024-11-05T17:05:23.302Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:34.425 Malloc0 : 2.01 33259.33 32.48 0.00 0.00 7690.82 1511.80 13166.78 00:26:34.425 [2024-11-05T17:05:23.302Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:34.425 Malloc0 : 2.02 33270.22 32.49 0.00 0.00 7674.68 1414.98 11558.17 00:26:34.425 [2024-11-05T17:05:23.302Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:34.425 Malloc0 : 2.02 33248.90 32.47 0.00 0.00 7664.69 1578.82 9770.82 00:26:34.425 [2024-11-05T17:05:23.302Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:34.425 Malloc0 : 2.02 33227.71 32.45 0.00 0.00 7655.87 1541.59 10009.13 00:26:34.425 [2024-11-05T17:05:23.302Z] =================================================================================================================== 00:26:34.425 [2024-11-05T17:05:23.302Z] Total : 133006.16 129.89 0.00 0.00 7671.50 1414.98 13166.78' 00:26:34.425 17:05:22 -- bdevperf/test_config.sh@27 -- # cleanup 00:26:34.425 17:05:22 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:34.425 17:05:22 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:26:34.425 17:05:22 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:34.425 17:05:22 -- bdevperf/common.sh@9 -- # local rw=write 00:26:34.425 17:05:22 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:34.425 17:05:22 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:34.425 17:05:22 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:34.425 00:26:34.425 17:05:22 -- bdevperf/common.sh@19 -- # echo 00:26:34.425 17:05:22 -- bdevperf/common.sh@20 -- # cat 00:26:34.425 17:05:22 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:26:34.425 00:26:34.425 17:05:22 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:34.425 17:05:22 -- bdevperf/common.sh@9 -- # local rw=write 00:26:34.425 17:05:22 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:34.425 17:05:22 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:34.425 17:05:22 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:34.425 17:05:22 -- bdevperf/common.sh@19 -- # echo 00:26:34.425 17:05:22 -- bdevperf/common.sh@20 -- # cat 00:26:34.425 17:05:22 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:26:34.425 17:05:22 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:34.425 17:05:22 -- bdevperf/common.sh@9 -- # local rw=write 00:26:34.425 17:05:22 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:34.425 17:05:22 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:34.425 17:05:22 -- bdevperf/common.sh@18 -- # job='[job2]' 00:26:34.425 17:05:22 -- bdevperf/common.sh@19 -- # echo 00:26:34.425 00:26:34.425 17:05:22 -- bdevperf/common.sh@20 -- # cat 00:26:34.425 17:05:22 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:38.608 17:05:26 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-11-05 17:05:23.009021] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:38.608 [2024-11-05 17:05:23.009801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132550 ] 00:26:38.608 Using job config with 3 jobs 00:26:38.609 [2024-11-05 17:05:23.180550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.609 [2024-11-05 17:05:23.356605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.609 cpumask for '\''job0'\'' is too big 00:26:38.609 cpumask for '\''job1'\'' is too big 00:26:38.609 cpumask for '\''job2'\'' is too big 00:26:38.609 Running I/O for 2 seconds... 00:26:38.609 00:26:38.609 Latency(us) 00:26:38.609 [2024-11-05T17:05:27.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.609 [2024-11-05T17:05:27.486Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:38.609 Malloc0 : 2.01 44490.92 43.45 0.00 0.00 5748.31 1377.75 8579.26 00:26:38.609 [2024-11-05T17:05:27.486Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:38.609 Malloc0 : 2.01 44461.85 43.42 0.00 0.00 5742.66 1325.61 7864.32 00:26:38.609 [2024-11-05T17:05:27.486Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:38.609 Malloc0 : 2.01 44513.00 43.47 0.00 0.00 5726.54 677.70 7923.90 00:26:38.609 [2024-11-05T17:05:27.486Z] =================================================================================================================== 00:26:38.609 [2024-11-05T17:05:27.486Z] Total : 133465.78 130.34 0.00 0.00 5739.16 677.70 8579.26' 00:26:38.609 17:05:26 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-11-05 17:05:23.009021] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:38.609 [2024-11-05 17:05:23.009801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132550 ] 00:26:38.609 Using job config with 3 jobs 00:26:38.609 [2024-11-05 17:05:23.180550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.609 [2024-11-05 17:05:23.356605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.609 cpumask for '\''job0'\'' is too big 00:26:38.609 cpumask for '\''job1'\'' is too big 00:26:38.609 cpumask for '\''job2'\'' is too big 00:26:38.609 Running I/O for 2 seconds... 
00:26:38.609 00:26:38.609 Latency(us) 00:26:38.609 [2024-11-05T17:05:27.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.609 [2024-11-05T17:05:27.486Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:38.609 Malloc0 : 2.01 44490.92 43.45 0.00 0.00 5748.31 1377.75 8579.26 00:26:38.609 [2024-11-05T17:05:27.486Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:38.609 Malloc0 : 2.01 44461.85 43.42 0.00 0.00 5742.66 1325.61 7864.32 00:26:38.609 [2024-11-05T17:05:27.486Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:38.609 Malloc0 : 2.01 44513.00 43.47 0.00 0.00 5726.54 677.70 7923.90 00:26:38.609 [2024-11-05T17:05:27.486Z] =================================================================================================================== 00:26:38.609 [2024-11-05T17:05:27.486Z] Total : 133465.78 130.34 0.00 0.00 5739.16 677.70 8579.26' 00:26:38.609 17:05:26 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:38.609 17:05:26 -- bdevperf/common.sh@32 -- # echo '[2024-11-05 17:05:23.009021] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:38.609 [2024-11-05 17:05:23.009801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132550 ] 00:26:38.609 Using job config with 3 jobs 00:26:38.609 [2024-11-05 17:05:23.180550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.609 [2024-11-05 17:05:23.356605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.609 cpumask for '\''job0'\'' is too big 00:26:38.609 cpumask for '\''job1'\'' is too big 00:26:38.609 cpumask for '\''job2'\'' is too big 00:26:38.609 Running I/O for 2 seconds... 
00:26:38.609 17:05:26 -- bdevperf/test_config.sh@35 -- # cleanup
00:26:38.609 17:05:26 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:26:38.609 17:05:26 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1
00:26:38.609 17:05:26 -- bdevperf/common.sh@8 -- # local job_section=global
00:26:38.609 17:05:26 -- bdevperf/common.sh@9 -- # local rw=rw
00:26:38.609 17:05:26 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1
00:26:38.609 17:05:26 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]]
00:26:38.609 17:05:26 -- bdevperf/common.sh@13 -- # cat
00:26:38.609
00:26:38.609 17:05:26 -- bdevperf/common.sh@18 -- # job='[global]'
00:26:38.609 17:05:26 -- bdevperf/common.sh@19 -- # echo
00:26:38.609 17:05:26 -- bdevperf/common.sh@20 -- # cat
00:26:38.609 17:05:26 -- bdevperf/test_config.sh@38 -- # create_job job0
00:26:38.609 17:05:26 -- bdevperf/common.sh@8 -- # local job_section=job0
00:26:38.609 17:05:26 -- bdevperf/common.sh@9 -- # local rw=
00:26:38.609 17:05:26 -- bdevperf/common.sh@10 -- # local filename=
00:26:38.609 17:05:26 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:26:38.609 17:05:26 -- bdevperf/common.sh@18 -- # job='[job0]'
00:26:38.609 17:05:26 -- bdevperf/common.sh@19 -- # echo
00:26:38.609
00:26:38.609 17:05:26 -- bdevperf/common.sh@20 -- # cat
00:26:38.609 17:05:26 -- bdevperf/test_config.sh@39 -- # create_job job1
00:26:38.609 17:05:26 -- bdevperf/common.sh@8 -- # local job_section=job1
00:26:38.609 17:05:26 -- bdevperf/common.sh@9 -- # local rw=
00:26:38.609 17:05:26 -- bdevperf/common.sh@10 -- # local filename=
00:26:38.609 17:05:26 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:26:38.609 17:05:26 -- bdevperf/common.sh@18 -- # job='[job1]'
00:26:38.609 17:05:26 -- bdevperf/common.sh@19 -- # echo
00:26:38.609
00:26:38.609 17:05:26 -- bdevperf/common.sh@20 -- # cat
00:26:38.609 17:05:26 -- bdevperf/test_config.sh@40 -- # create_job job2
00:26:38.609 17:05:26 -- bdevperf/common.sh@8 -- # local job_section=job2
00:26:38.609 17:05:26 -- bdevperf/common.sh@9 -- # local rw=
00:26:38.609 17:05:26 -- bdevperf/common.sh@10 -- # local filename=
00:26:38.609 17:05:26 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:26:38.609 17:05:26 -- bdevperf/common.sh@18 -- # job='[job2]'
00:26:38.609 17:05:26 -- bdevperf/common.sh@19 -- # echo
00:26:38.609
00:26:38.609 17:05:26 -- bdevperf/common.sh@20 -- # cat
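The create_job calls traced above build up the fio-style job file that bdevperf consumes via -j. Only the control flow is visible in the log (the heredoc bodies behind the "# cat" entries are not echoed), so the sketch below is an approximation: the section header and trailing blank separator mirror the trace, while the rw=/filename= fields inside a section are assumptions.

  testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf

  create_job() {
    local job_section=$1
    local rw=$2
    local filename=$3
    # the real helper special-cases [global] (common.sh@12); simplified here
    {
      echo "[$job_section]"
      [[ -n $rw ]] && echo "rw=$rw"               # assumed field name
      [[ -n $filename ]] && echo "filename=$filename"  # assumed field name
      echo                                        # blank separator, matching the "# echo" entries
    } >> "$testconf"
  }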
00:26:38.609 17:05:26 -- bdevperf/test_config.sh@41 -- # create_job job3
00:26:38.609 17:05:26 -- bdevperf/common.sh@8 -- # local job_section=job3
00:26:38.609 17:05:26 -- bdevperf/common.sh@9 -- # local rw=
00:26:38.609 17:05:26 -- bdevperf/common.sh@10 -- # local filename=
00:26:38.609 17:05:26 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]]
00:26:38.609 17:05:26 -- bdevperf/common.sh@18 -- # job='[job3]'
00:26:38.609 17:05:26 -- bdevperf/common.sh@19 -- # echo
00:26:38.609
00:26:38.609 17:05:26 -- bdevperf/common.sh@20 -- # cat
00:26:38.609 17:05:26 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
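Taken together, the steps traced at test_config.sh@37-43 reduce to this round trip (paths copied from the trace, helpers as sketched above):

  create_job global rw Malloc0:Malloc1
  create_job job0
  create_job job1
  create_job job2
  create_job job3
  # -t 2 runs the workload for two seconds; --json carries the bdev config,
  # -j the generated job file
  bdevperf_output=$(/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json \
      -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf)
  [[ $(get_num_jobs "$bdevperf_output") == 4 ]]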
00:26:42.795 17:05:30 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-11-05 17:05:27.000091] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:42.795 [2024-11-05 17:05:27.000282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132609 ]
00:26:42.795 Using job config with 4 jobs
00:26:42.795 [2024-11-05 17:05:27.170441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:42.795 [2024-11-05 17:05:27.386041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:42.795 cpumask for '\''job0'\'' is too big
00:26:42.795 cpumask for '\''job1'\'' is too big
00:26:42.795 cpumask for '\''job2'\'' is too big
00:26:42.795 cpumask for '\''job3'\'' is too big
00:26:42.795 Running I/O for 2 seconds...
00:26:42.795
00:26:42.795 Latency(us)
00:26:42.795 [2024-11-05T17:05:31.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:42.795 [2024-11-05T17:05:31.672Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:26:42.795 Malloc0 : 2.03 16555.80 16.17 0.00 0.00 15451.56 2859.75 24903.68
00:26:42.795 [2024-11-05T17:05:31.672Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:26:42.795 Malloc1 : 2.03 16543.67 16.16 0.00 0.00 15453.27 3589.59 24784.52
00:26:42.795 [2024-11-05T17:05:31.672Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:26:42.795 Malloc0 : 2.03 16531.21 16.14 0.00 0.00 15423.13 2934.23 21805.61
00:26:42.795 [2024-11-05T17:05:31.672Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:26:42.795 Malloc1 : 2.03 16519.04 16.13 0.00 0.00 15426.31 3515.11 21567.30
00:26:42.795 [2024-11-05T17:05:31.672Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:26:42.795 Malloc0 : 2.03 16505.97 16.12 0.00 0.00 15397.52 2993.80 18588.39
00:26:42.795 [2024-11-05T17:05:31.672Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:26:42.795 Malloc1 : 2.03 16494.80 16.11 0.00 0.00 15393.08 3366.17 18469.24
00:26:42.795 [2024-11-05T17:05:31.672Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:26:42.795 Malloc0 : 2.04 16576.74 16.19 0.00 0.00 15275.90 2606.55 17754.30
00:26:42.795 [2024-11-05T17:05:31.672Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:26:42.795 Malloc1 : 2.04 16564.32 16.18 0.00 0.00 15274.12 2219.29 17635.14
00:26:42.795 [2024-11-05T17:05:31.672Z] ===================================================================================================================
00:26:42.795 [2024-11-05T17:05:31.672Z] Total : 132291.55 129.19 0.00 0.00 15386.65 2219.29 24903.68'
00:26:42.795 17:05:30 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-11-05 17:05:27.000091] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... Using job config with 4 jobs ...'
00:26:42.796 17:05:30 -- bdevperf/common.sh@32 -- # echo '[2024-11-05 17:05:27.000091] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... Using job config with 4 jobs ...'
00:26:42.796 17:05:30 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:26:42.796 17:05:30 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:26:42.796 17:05:30 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]]
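With the job count verified, the test tears down its scratch state; per the trace that follows, cleanup is nothing more than removing the generated job file before the error-handling trap is cleared:

  cleanup() {
    rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
  }
  cleanup
  trap - SIGINT SIGTERM EXIT   # drop the trap once the test has passed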
00:26:42.796 17:05:30 -- bdevperf/test_config.sh@44 -- # cleanup
00:26:42.796 17:05:30 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:26:42.796 17:05:30 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:26:42.796
00:26:42.796 real 0m16.217s
00:26:42.796 user 0m14.444s
00:26:42.796 sys 0m1.193s
00:26:42.796 ************************************
00:26:42.796 END TEST bdevperf_config
00:26:42.796 ************************************
00:26:42.796 17:05:30 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:42.796 17:05:30 -- common/autotest_common.sh@10 -- # set +x
00:26:42.796 17:05:30 -- spdk/autotest.sh@185 -- # uname -s
00:26:42.796 17:05:31 -- spdk/autotest.sh@185 -- # [[ Linux == Linux ]]
00:26:42.796 17:05:31 -- spdk/autotest.sh@186 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh
00:26:42.796 17:05:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:42.796 17:05:31 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:42.796 17:05:31 -- common/autotest_common.sh@10 -- # set +x
00:26:42.796 ************************************
00:26:42.796 START TEST reactor_set_interrupt
************************************ 00:26:42.796 17:05:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:42.796 * Looking for test storage... 00:26:42.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:42.796 17:05:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:42.796 17:05:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:42.796 17:05:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:42.796 17:05:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:42.796 17:05:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:42.796 17:05:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:42.796 17:05:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:42.796 17:05:31 -- scripts/common.sh@335 -- # IFS=.-: 00:26:42.796 17:05:31 -- scripts/common.sh@335 -- # read -ra ver1 00:26:42.796 17:05:31 -- scripts/common.sh@336 -- # IFS=.-: 00:26:42.796 17:05:31 -- scripts/common.sh@336 -- # read -ra ver2 00:26:42.796 17:05:31 -- scripts/common.sh@337 -- # local 'op=<' 00:26:42.796 17:05:31 -- scripts/common.sh@339 -- # ver1_l=2 00:26:42.796 17:05:31 -- scripts/common.sh@340 -- # ver2_l=1 00:26:42.796 17:05:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:42.796 17:05:31 -- scripts/common.sh@343 -- # case "$op" in 00:26:42.796 17:05:31 -- scripts/common.sh@344 -- # : 1 00:26:42.796 17:05:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:42.796 17:05:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:42.796 17:05:31 -- scripts/common.sh@364 -- # decimal 1 00:26:42.796 17:05:31 -- scripts/common.sh@352 -- # local d=1 00:26:42.796 17:05:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:42.796 17:05:31 -- scripts/common.sh@354 -- # echo 1 00:26:42.796 17:05:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:42.796 17:05:31 -- scripts/common.sh@365 -- # decimal 2 00:26:42.796 17:05:31 -- scripts/common.sh@352 -- # local d=2 00:26:42.796 17:05:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:42.796 17:05:31 -- scripts/common.sh@354 -- # echo 2 00:26:42.796 17:05:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:42.796 17:05:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:42.796 17:05:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:42.796 17:05:31 -- scripts/common.sh@367 -- # return 0 00:26:42.796 17:05:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:42.796 17:05:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:42.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.796 --rc genhtml_branch_coverage=1 00:26:42.796 --rc genhtml_function_coverage=1 00:26:42.796 --rc genhtml_legend=1 00:26:42.796 --rc geninfo_all_blocks=1 00:26:42.796 --rc geninfo_unexecuted_blocks=1 00:26:42.796 00:26:42.796 ' 00:26:42.796 17:05:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:42.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.796 --rc genhtml_branch_coverage=1 00:26:42.796 --rc genhtml_function_coverage=1 00:26:42.796 --rc genhtml_legend=1 00:26:42.796 --rc geninfo_all_blocks=1 00:26:42.796 --rc geninfo_unexecuted_blocks=1 00:26:42.796 00:26:42.796 ' 00:26:42.796 17:05:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:42.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.796 --rc genhtml_branch_coverage=1 
00:26:42.796 --rc genhtml_function_coverage=1 00:26:42.796 --rc genhtml_legend=1 00:26:42.796 --rc geninfo_all_blocks=1 00:26:42.796 --rc geninfo_unexecuted_blocks=1 00:26:42.796 00:26:42.796 ' 00:26:42.796 17:05:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:42.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.796 --rc genhtml_branch_coverage=1 00:26:42.796 --rc genhtml_function_coverage=1 00:26:42.796 --rc genhtml_legend=1 00:26:42.796 --rc geninfo_all_blocks=1 00:26:42.796 --rc geninfo_unexecuted_blocks=1 00:26:42.796 00:26:42.796 ' 00:26:42.796 17:05:31 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:26:42.796 17:05:31 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:42.796 17:05:31 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:42.796 17:05:31 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:42.796 17:05:31 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:26:42.796 17:05:31 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:42.796 17:05:31 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:26:42.796 17:05:31 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:42.796 17:05:31 -- common/autotest_common.sh@34 -- # set -e 00:26:42.797 17:05:31 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:42.797 17:05:31 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:42.797 17:05:31 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:26:42.797 17:05:31 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:26:42.797 17:05:31 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:26:42.797 17:05:31 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:26:42.797 17:05:31 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:26:42.797 17:05:31 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:42.797 17:05:31 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:26:42.797 17:05:31 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:26:42.797 17:05:31 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:26:42.797 17:05:31 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:26:42.797 17:05:31 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:26:42.797 17:05:31 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:26:42.797 17:05:31 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:26:42.797 17:05:31 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:26:42.797 17:05:31 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:26:42.797 17:05:31 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:26:42.797 17:05:31 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:42.797 17:05:31 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:26:42.797 17:05:31 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:26:42.797 17:05:31 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:42.797 17:05:31 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:42.797 17:05:31 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:26:42.797 17:05:31 -- common/build_config.sh@21 -- 
# CONFIG_ISCSI_INITIATOR=y 00:26:42.797 17:05:31 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:26:42.797 17:05:31 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:26:42.797 17:05:31 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:26:42.797 17:05:31 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:26:42.797 17:05:31 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:26:42.797 17:05:31 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:26:42.797 17:05:31 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:26:42.797 17:05:31 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:26:42.797 17:05:31 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:26:42.797 17:05:31 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:26:42.797 17:05:31 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:26:42.797 17:05:31 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:26:42.797 17:05:31 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:26:42.797 17:05:31 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:26:42.797 17:05:31 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:26:42.797 17:05:31 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:26:42.797 17:05:31 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:26:42.797 17:05:31 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:26:42.797 17:05:31 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:26:42.797 17:05:31 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:26:42.797 17:05:31 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:26:42.797 17:05:31 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:26:42.797 17:05:31 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:42.797 17:05:31 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:26:42.797 17:05:31 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:26:42.797 17:05:31 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:26:42.797 17:05:31 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:42.797 17:05:31 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:26:42.797 17:05:31 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:26:42.797 17:05:31 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:26:42.797 17:05:31 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:26:42.797 17:05:31 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:26:42.797 17:05:31 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:26:42.797 17:05:31 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:26:42.797 17:05:31 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:26:42.797 17:05:31 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:26:42.797 17:05:31 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:26:42.797 17:05:31 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:26:42.797 17:05:31 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:26:42.797 17:05:31 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:26:42.797 17:05:31 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:26:42.797 17:05:31 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:26:42.797 17:05:31 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:26:42.797 17:05:31 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:26:42.797 17:05:31 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:42.797 17:05:31 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:26:42.797 17:05:31 -- common/build_config.sh@68 -- 
# CONFIG_AVAHI=n 00:26:42.797 17:05:31 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:26:42.797 17:05:31 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:26:42.797 17:05:31 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:26:42.797 17:05:31 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:26:42.797 17:05:31 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:26:42.797 17:05:31 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:26:42.797 17:05:31 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:26:42.797 17:05:31 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:26:42.797 17:05:31 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:26:42.797 17:05:31 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:26:42.797 17:05:31 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:26:42.797 17:05:31 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:42.797 17:05:31 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:42.797 17:05:31 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:26:42.797 17:05:31 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:26:42.797 17:05:31 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:26:42.797 17:05:31 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:26:42.797 17:05:31 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:26:42.797 17:05:31 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:26:42.797 17:05:31 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:42.797 17:05:31 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:42.797 17:05:31 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:42.797 17:05:31 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:42.797 17:05:31 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:42.797 17:05:31 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:42.797 17:05:31 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:26:42.797 17:05:31 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:42.797 #define SPDK_CONFIG_H 00:26:42.797 #define SPDK_CONFIG_APPS 1 00:26:42.797 #define SPDK_CONFIG_ARCH native 00:26:42.797 #define SPDK_CONFIG_ASAN 1 00:26:42.797 #undef SPDK_CONFIG_AVAHI 00:26:42.797 #undef SPDK_CONFIG_CET 00:26:42.797 #define SPDK_CONFIG_COVERAGE 1 00:26:42.797 #define SPDK_CONFIG_CROSS_PREFIX 00:26:42.797 #undef SPDK_CONFIG_CRYPTO 00:26:42.797 #undef SPDK_CONFIG_CRYPTO_MLX5 00:26:42.797 #undef SPDK_CONFIG_CUSTOMOCF 00:26:42.797 #undef SPDK_CONFIG_DAOS 00:26:42.797 #define SPDK_CONFIG_DAOS_DIR 00:26:42.797 #define SPDK_CONFIG_DEBUG 1 00:26:42.797 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:26:42.797 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:26:42.797 #define SPDK_CONFIG_DPDK_INC_DIR 00:26:42.797 #define SPDK_CONFIG_DPDK_LIB_DIR 00:26:42.797 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:42.797 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:42.797 #define SPDK_CONFIG_EXAMPLES 1 00:26:42.797 #undef SPDK_CONFIG_FC 00:26:42.797 #define SPDK_CONFIG_FC_PATH 00:26:42.797 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:42.797 
#define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:42.797 #undef SPDK_CONFIG_FUSE 00:26:42.797 #undef SPDK_CONFIG_FUZZER 00:26:42.797 #define SPDK_CONFIG_FUZZER_LIB 00:26:42.797 #undef SPDK_CONFIG_GOLANG 00:26:42.797 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:26:42.797 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:42.797 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:42.797 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:42.797 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:42.797 #define SPDK_CONFIG_IDXD 1 00:26:42.797 #undef SPDK_CONFIG_IDXD_KERNEL 00:26:42.797 #undef SPDK_CONFIG_IPSEC_MB 00:26:42.797 #define SPDK_CONFIG_IPSEC_MB_DIR 00:26:42.797 #define SPDK_CONFIG_ISAL 1 00:26:42.797 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:42.797 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:42.797 #define SPDK_CONFIG_LIBDIR 00:26:42.797 #undef SPDK_CONFIG_LTO 00:26:42.797 #define SPDK_CONFIG_MAX_LCORES 00:26:42.797 #define SPDK_CONFIG_NVME_CUSE 1 00:26:42.797 #undef SPDK_CONFIG_OCF 00:26:42.797 #define SPDK_CONFIG_OCF_PATH 00:26:42.797 #define SPDK_CONFIG_OPENSSL_PATH 00:26:42.797 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:42.797 #undef SPDK_CONFIG_PGO_USE 00:26:42.797 #define SPDK_CONFIG_PREFIX /usr/local 00:26:42.797 #define SPDK_CONFIG_RAID5F 1 00:26:42.797 #undef SPDK_CONFIG_RBD 00:26:42.797 #define SPDK_CONFIG_RDMA 1 00:26:42.797 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:42.798 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:42.798 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:42.798 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:42.798 #undef SPDK_CONFIG_SHARED 00:26:42.798 #undef SPDK_CONFIG_SMA 00:26:42.798 #define SPDK_CONFIG_TESTS 1 00:26:42.798 #undef SPDK_CONFIG_TSAN 00:26:42.798 #undef SPDK_CONFIG_UBLK 00:26:42.798 #define SPDK_CONFIG_UBSAN 1 00:26:42.798 #define SPDK_CONFIG_UNIT_TESTS 1 00:26:42.798 #undef SPDK_CONFIG_URING 00:26:42.798 #define SPDK_CONFIG_URING_PATH 00:26:42.798 #undef SPDK_CONFIG_URING_ZNS 00:26:42.798 #undef SPDK_CONFIG_USDT 00:26:42.798 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:26:42.798 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:26:42.798 #undef SPDK_CONFIG_VFIO_USER 00:26:42.798 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:42.798 #define SPDK_CONFIG_VHOST 1 00:26:42.798 #define SPDK_CONFIG_VIRTIO 1 00:26:42.798 #undef SPDK_CONFIG_VTUNE 00:26:42.798 #define SPDK_CONFIG_VTUNE_DIR 00:26:42.798 #define SPDK_CONFIG_WERROR 1 00:26:42.798 #define SPDK_CONFIG_WPDK_DIR 00:26:42.798 #undef SPDK_CONFIG_XNVME 00:26:42.798 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:42.798 17:05:31 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:42.798 17:05:31 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:42.798 17:05:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.798 17:05:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.798 17:05:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.798 17:05:31 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:42.798 17:05:31 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:42.798 17:05:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:42.798 17:05:31 -- paths/export.sh@5 -- # export PATH 00:26:42.798 17:05:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:42.798 17:05:31 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:42.798 17:05:31 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:42.798 17:05:31 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:42.798 17:05:31 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:42.798 17:05:31 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:26:42.798 17:05:31 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:26:42.798 17:05:31 -- pm/common@16 -- # TEST_TAG=N/A 00:26:42.798 17:05:31 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:26:42.798 17:05:31 -- common/autotest_common.sh@52 -- # : 1 00:26:42.798 17:05:31 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:26:42.798 17:05:31 -- common/autotest_common.sh@56 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:42.798 17:05:31 -- common/autotest_common.sh@58 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:26:42.798 17:05:31 -- common/autotest_common.sh@60 -- # : 1 00:26:42.798 17:05:31 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:42.798 17:05:31 -- common/autotest_common.sh@62 -- # : 1 00:26:42.798 17:05:31 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:26:42.798 17:05:31 -- common/autotest_common.sh@64 -- # : 00:26:42.798 17:05:31 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:26:42.798 17:05:31 -- common/autotest_common.sh@66 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:26:42.798 17:05:31 -- common/autotest_common.sh@68 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:26:42.798 17:05:31 -- common/autotest_common.sh@70 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:26:42.798 17:05:31 -- common/autotest_common.sh@72 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@73 -- # export 
SPDK_TEST_ISCSI_INITIATOR 00:26:42.798 17:05:31 -- common/autotest_common.sh@74 -- # : 1 00:26:42.798 17:05:31 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:26:42.798 17:05:31 -- common/autotest_common.sh@76 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:26:42.798 17:05:31 -- common/autotest_common.sh@78 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:26:42.798 17:05:31 -- common/autotest_common.sh@80 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:26:42.798 17:05:31 -- common/autotest_common.sh@82 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:26:42.798 17:05:31 -- common/autotest_common.sh@84 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:26:42.798 17:05:31 -- common/autotest_common.sh@86 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:26:42.798 17:05:31 -- common/autotest_common.sh@88 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:26:42.798 17:05:31 -- common/autotest_common.sh@90 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:42.798 17:05:31 -- common/autotest_common.sh@92 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:26:42.798 17:05:31 -- common/autotest_common.sh@94 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:26:42.798 17:05:31 -- common/autotest_common.sh@96 -- # : rdma 00:26:42.798 17:05:31 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:42.798 17:05:31 -- common/autotest_common.sh@98 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:26:42.798 17:05:31 -- common/autotest_common.sh@100 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:26:42.798 17:05:31 -- common/autotest_common.sh@102 -- # : 1 00:26:42.798 17:05:31 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:26:42.798 17:05:31 -- common/autotest_common.sh@104 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:26:42.798 17:05:31 -- common/autotest_common.sh@106 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:26:42.798 17:05:31 -- common/autotest_common.sh@108 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:26:42.798 17:05:31 -- common/autotest_common.sh@110 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:26:42.798 17:05:31 -- common/autotest_common.sh@112 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:42.798 17:05:31 -- common/autotest_common.sh@114 -- # : 1 00:26:42.798 17:05:31 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:26:42.798 17:05:31 -- common/autotest_common.sh@116 -- # : 1 00:26:42.798 17:05:31 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:26:42.798 17:05:31 -- common/autotest_common.sh@118 -- # : 00:26:42.798 17:05:31 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:42.798 17:05:31 -- common/autotest_common.sh@120 -- # : 0 00:26:42.798 17:05:31 -- 
common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:26:42.798 17:05:31 -- common/autotest_common.sh@122 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:26:42.798 17:05:31 -- common/autotest_common.sh@124 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:26:42.798 17:05:31 -- common/autotest_common.sh@126 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:26:42.798 17:05:31 -- common/autotest_common.sh@128 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:26:42.798 17:05:31 -- common/autotest_common.sh@130 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:26:42.798 17:05:31 -- common/autotest_common.sh@132 -- # : 00:26:42.798 17:05:31 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:26:42.798 17:05:31 -- common/autotest_common.sh@134 -- # : true 00:26:42.798 17:05:31 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:26:42.798 17:05:31 -- common/autotest_common.sh@136 -- # : 1 00:26:42.798 17:05:31 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:26:42.798 17:05:31 -- common/autotest_common.sh@138 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:26:42.798 17:05:31 -- common/autotest_common.sh@140 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:26:42.798 17:05:31 -- common/autotest_common.sh@142 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:26:42.798 17:05:31 -- common/autotest_common.sh@144 -- # : 0 00:26:42.798 17:05:31 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:26:42.798 17:05:31 -- common/autotest_common.sh@146 -- # : 0 00:26:42.799 17:05:31 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:26:42.799 17:05:31 -- common/autotest_common.sh@148 -- # : 00:26:42.799 17:05:31 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:26:42.799 17:05:31 -- common/autotest_common.sh@150 -- # : 0 00:26:42.799 17:05:31 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:26:42.799 17:05:31 -- common/autotest_common.sh@152 -- # : 0 00:26:42.799 17:05:31 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:26:42.799 17:05:31 -- common/autotest_common.sh@154 -- # : 0 00:26:42.799 17:05:31 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:26:42.799 17:05:31 -- common/autotest_common.sh@156 -- # : 0 00:26:42.799 17:05:31 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:26:42.799 17:05:31 -- common/autotest_common.sh@158 -- # : 0 00:26:42.799 17:05:31 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:26:42.799 17:05:31 -- common/autotest_common.sh@160 -- # : 0 00:26:42.799 17:05:31 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:26:42.799 17:05:31 -- common/autotest_common.sh@163 -- # : 00:26:42.799 17:05:31 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:26:42.799 17:05:31 -- common/autotest_common.sh@165 -- # : 0 00:26:42.799 17:05:31 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:26:42.799 17:05:31 -- common/autotest_common.sh@167 -- # : 0 00:26:42.799 17:05:31 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:42.799 17:05:31 -- common/autotest_common.sh@171 -- # export 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:42.799 17:05:31 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:42.799 17:05:31 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:42.799 17:05:31 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:42.799 17:05:31 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:42.799 17:05:31 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:42.799 17:05:31 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:42.799 17:05:31 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:42.799 17:05:31 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:26:42.799 17:05:31 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:42.799 17:05:31 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:42.799 17:05:31 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:42.799 17:05:31 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:42.799 17:05:31 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:26:42.799 17:05:31 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:42.799 17:05:31 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:42.799 17:05:31 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:42.799 17:05:31 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:42.799 17:05:31 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:42.799 17:05:31 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:26:42.799 17:05:31 -- common/autotest_common.sh@196 -- # cat 00:26:42.799 
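Collected in one place, the sanitizer environment assembled around this point in the trace reduces to the following shell commands (values copied verbatim from the log; the suppression-file handling at autotest_common.sh@194-224 is reconstructed from the traced rm/echo/export entries):

  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
  asan_suppression_file=/var/tmp/asan_suppression_file
  rm -rf "$asan_suppression_file"                    # start from a clean file
  echo "leak:libfuse3.so" >> "$asan_suppression_file"  # known benign leak
  export LSAN_OPTIONS=suppressions=$asan_suppression_file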
17:05:31 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:26:42.799 17:05:31 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:42.799 17:05:31 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:42.799 17:05:31 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:42.799 17:05:31 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:42.799 17:05:31 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:26:42.799 17:05:31 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:26:42.799 17:05:31 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:42.799 17:05:31 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:42.799 17:05:31 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:42.799 17:05:31 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:42.799 17:05:31 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:26:42.799 17:05:31 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:26:42.799 17:05:31 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:42.799 17:05:31 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:42.799 17:05:31 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:42.799 17:05:31 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:42.799 17:05:31 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:42.799 17:05:31 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:42.799 17:05:31 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:26:42.799 17:05:31 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:26:42.799 17:05:31 -- common/autotest_common.sh@249 -- # _LCOV= 00:26:42.799 17:05:31 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:26:42.799 17:05:31 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:26:42.799 17:05:31 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:26:42.799 17:05:31 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:26:42.799 17:05:31 -- common/autotest_common.sh@255 -- # lcov_opt= 00:26:42.799 17:05:31 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:26:42.799 17:05:31 -- common/autotest_common.sh@259 -- # export valgrind= 00:26:42.799 17:05:31 -- common/autotest_common.sh@259 -- # valgrind= 00:26:42.799 17:05:31 -- common/autotest_common.sh@265 -- # uname -s 00:26:42.799 17:05:31 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:26:42.799 17:05:31 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:26:42.799 17:05:31 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:26:42.799 17:05:31 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:26:42.799 17:05:31 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:26:42.799 17:05:31 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:26:42.799 17:05:31 -- common/autotest_common.sh@275 -- # MAKE=make 00:26:42.799 17:05:31 -- common/autotest_common.sh@276 
-- # MAKEFLAGS=-j10 00:26:42.799 17:05:31 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:26:42.799 17:05:31 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:26:42.799 17:05:31 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:26:42.799 17:05:31 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:26:42.799 17:05:31 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:26:42.799 17:05:31 -- common/autotest_common.sh@319 -- # [[ -z 132700 ]] 00:26:42.799 17:05:31 -- common/autotest_common.sh@319 -- # kill -0 132700 00:26:42.799 17:05:31 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:26:42.799 17:05:31 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:26:42.799 17:05:31 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:26:42.799 17:05:31 -- common/autotest_common.sh@332 -- # local mount target_dir 00:26:42.799 17:05:31 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:26:42.799 17:05:31 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:26:42.799 17:05:31 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:26:42.799 17:05:31 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:26:42.799 17:05:31 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.rnfhtL 00:26:42.799 17:05:31 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:42.799 17:05:31 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:26:42.799 17:05:31 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:26:42.799 17:05:31 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.rnfhtL/tests/interrupt /tmp/spdk.rnfhtL 00:26:42.799 17:05:31 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:26:42.799 17:05:31 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:42.799 17:05:31 -- common/autotest_common.sh@328 -- # df -T 00:26:42.799 17:05:31 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:26:42.799 17:05:31 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:26:42.799 17:05:31 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:26:42.799 17:05:31 -- common/autotest_common.sh@363 -- # avails["$mount"]=1248956416 00:26:42.799 17:05:31 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253683200 00:26:42.799 17:05:31 -- common/autotest_common.sh@364 -- # uses["$mount"]=4726784 00:26:42.799 17:05:31 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:42.799 17:05:31 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda1 00:26:42.799 17:05:31 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:26:42.799 17:05:31 -- common/autotest_common.sh@363 -- # avails["$mount"]=10293694464 00:26:42.799 17:05:31 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20616794112 00:26:42.800 17:05:31 -- common/autotest_common.sh@364 -- # uses["$mount"]=10306322432 00:26:42.800 17:05:31 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:42.800 17:05:31 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:26:42.800 17:05:31 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:26:42.800 17:05:31 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265810944 00:26:42.800 17:05:31 -- common/autotest_common.sh@363 -- # 
sizes["$mount"]=6268403712 00:26:42.800 17:05:31 -- common/autotest_common.sh@364 -- # uses["$mount"]=2592768 00:26:42.800 17:05:31 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:42.800 17:05:31 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:26:42.800 17:05:31 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:26:42.800 17:05:31 -- common/autotest_common.sh@363 -- # avails["$mount"]=5242880 00:26:42.800 17:05:31 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5242880 00:26:42.800 17:05:31 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:26:42.800 17:05:31 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:42.800 17:05:31 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda15 00:26:42.800 17:05:31 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:26:42.800 17:05:31 -- common/autotest_common.sh@363 -- # avails["$mount"]=103061504 00:26:42.800 17:05:31 -- common/autotest_common.sh@363 -- # sizes["$mount"]=109395968 00:26:42.800 17:05:31 -- common/autotest_common.sh@364 -- # uses["$mount"]=6334464 00:26:42.800 17:05:31 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:42.800 17:05:31 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:26:42.800 17:05:31 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:26:42.800 17:05:31 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253675008 00:26:42.800 17:05:31 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253679104 00:26:42.800 17:05:31 -- common/autotest_common.sh@364 -- # uses["$mount"]=4096 00:26:42.800 17:05:31 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:42.800 17:05:31 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt/output 00:26:42.800 17:05:31 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:26:42.800 17:05:31 -- common/autotest_common.sh@363 -- # avails["$mount"]=98692923392 00:26:42.800 17:05:31 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:26:42.800 17:05:31 -- common/autotest_common.sh@364 -- # uses["$mount"]=1009856512 00:26:42.800 17:05:31 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:42.800 17:05:31 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:26:42.800 * Looking for test storage... 
00:26:42.800 17:05:31 -- common/autotest_common.sh@369 -- # local target_space new_size 00:26:42.800 17:05:31 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:26:42.800 17:05:31 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:42.800 17:05:31 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:42.800 17:05:31 -- common/autotest_common.sh@373 -- # mount=/ 00:26:42.800 17:05:31 -- common/autotest_common.sh@375 -- # target_space=10293694464 00:26:42.800 17:05:31 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:26:42.800 17:05:31 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:26:42.800 17:05:31 -- common/autotest_common.sh@381 -- # [[ ext4 == tmpfs ]] 00:26:42.800 17:05:31 -- common/autotest_common.sh@381 -- # [[ ext4 == ramfs ]] 00:26:42.800 17:05:31 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:26:42.800 17:05:31 -- common/autotest_common.sh@382 -- # new_size=12520914944 00:26:42.800 17:05:31 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:26:42.800 17:05:31 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:42.800 17:05:31 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:42.800 17:05:31 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:42.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:42.800 17:05:31 -- common/autotest_common.sh@390 -- # return 0 00:26:42.800 17:05:31 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:26:42.800 17:05:31 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:26:42.800 17:05:31 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:42.800 17:05:31 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:42.800 17:05:31 -- common/autotest_common.sh@1682 -- # true 00:26:42.800 17:05:31 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:26:42.800 17:05:31 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:26:42.800 17:05:31 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:26:42.800 17:05:31 -- common/autotest_common.sh@27 -- # exec 00:26:42.800 17:05:31 -- common/autotest_common.sh@29 -- # exec 00:26:42.800 17:05:31 -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:42.800 17:05:31 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:26:42.800 17:05:31 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:42.800 17:05:31 -- common/autotest_common.sh@18 -- # set -x 00:26:42.800 17:05:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:42.800 17:05:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:42.800 17:05:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:42.800 17:05:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:42.800 17:05:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:42.800 17:05:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:42.800 17:05:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:42.800 17:05:31 -- scripts/common.sh@335 -- # IFS=.-: 00:26:42.800 17:05:31 -- scripts/common.sh@335 -- # read -ra ver1 00:26:42.800 17:05:31 -- scripts/common.sh@336 -- # IFS=.-: 00:26:42.800 17:05:31 -- scripts/common.sh@336 -- # read -ra ver2 00:26:42.800 17:05:31 -- scripts/common.sh@337 -- # local 'op=<' 00:26:42.800 17:05:31 -- scripts/common.sh@339 -- # ver1_l=2 00:26:42.800 17:05:31 -- scripts/common.sh@340 -- # ver2_l=1 00:26:42.800 17:05:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:42.800 17:05:31 -- scripts/common.sh@343 -- # case "$op" in 00:26:42.800 17:05:31 -- scripts/common.sh@344 -- # : 1 00:26:42.800 17:05:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:42.800 17:05:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:42.800 17:05:31 -- scripts/common.sh@364 -- # decimal 1 00:26:42.800 17:05:31 -- scripts/common.sh@352 -- # local d=1 00:26:42.800 17:05:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:42.800 17:05:31 -- scripts/common.sh@354 -- # echo 1 00:26:42.800 17:05:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:42.800 17:05:31 -- scripts/common.sh@365 -- # decimal 2 00:26:42.800 17:05:31 -- scripts/common.sh@352 -- # local d=2 00:26:42.800 17:05:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:42.800 17:05:31 -- scripts/common.sh@354 -- # echo 2 00:26:42.800 17:05:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:42.800 17:05:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:42.800 17:05:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:42.800 17:05:31 -- scripts/common.sh@367 -- # return 0 00:26:42.800 17:05:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:42.800 17:05:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:42.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.800 --rc genhtml_branch_coverage=1 00:26:42.800 --rc genhtml_function_coverage=1 00:26:42.800 --rc genhtml_legend=1 00:26:42.800 --rc geninfo_all_blocks=1 00:26:42.800 --rc geninfo_unexecuted_blocks=1 00:26:42.800 00:26:42.800 ' 00:26:42.800 17:05:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:42.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.800 --rc genhtml_branch_coverage=1 00:26:42.800 --rc genhtml_function_coverage=1 00:26:42.800 --rc genhtml_legend=1 00:26:42.800 --rc geninfo_all_blocks=1 00:26:42.800 --rc geninfo_unexecuted_blocks=1 00:26:42.800 00:26:42.800 ' 00:26:42.800 17:05:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:42.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.800 --rc genhtml_branch_coverage=1 00:26:42.800 --rc genhtml_function_coverage=1 00:26:42.800 --rc genhtml_legend=1 00:26:42.800 --rc geninfo_all_blocks=1 00:26:42.800 --rc 
geninfo_unexecuted_blocks=1 00:26:42.800 00:26:42.800 ' 00:26:42.800 17:05:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:42.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.800 --rc genhtml_branch_coverage=1 00:26:42.801 --rc genhtml_function_coverage=1 00:26:42.801 --rc genhtml_legend=1 00:26:42.801 --rc geninfo_all_blocks=1 00:26:42.801 --rc geninfo_unexecuted_blocks=1 00:26:42.801 00:26:42.801 ' 00:26:42.801 17:05:31 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:42.801 17:05:31 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:26:42.801 17:05:31 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:26:42.801 17:05:31 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:26:42.801 17:05:31 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:26:42.801 17:05:31 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:26:42.801 17:05:31 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:42.801 17:05:31 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:42.801 17:05:31 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:26:42.801 17:05:31 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.801 17:05:31 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:42.801 17:05:31 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=132759 00:26:42.801 17:05:31 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:42.801 17:05:31 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:42.801 17:05:31 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 132759 /var/tmp/spdk.sock 00:26:42.801 17:05:31 -- common/autotest_common.sh@829 -- # '[' -z 132759 ']' 00:26:42.801 17:05:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.801 17:05:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:42.801 17:05:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:42.801 17:05:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:42.801 17:05:31 -- common/autotest_common.sh@10 -- # set +x 00:26:42.801 [2024-11-05 17:05:31.547148] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
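start_intr_tgt, traced just above, backgrounds the interrupt_tgt example app on a three-core mask and refuses to proceed until the RPC socket answers. A condensed sketch under stated assumptions: the binary path, the -m 0x07 -r /var/tmp/spdk.sock -E -g flags, the trap, and max_retries=100 come from the trace, while the poll body and the rpc_get_methods probe are assumptions (waitforlisten's internals are not shown in this log), and $rootdir stands for the repo root seen in the traced paths:

    # Launch interrupt_tgt pinned to cores 0-2 and wait for its RPC socket.
    rpc_addr=/var/tmp/spdk.sock
    "$rootdir"/build/examples/interrupt_tgt -m 0x07 -r "$rpc_addr" -E -g &
    intr_tgt_pid=$!
    trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT

    # Assumed poll body: retry until the app answers on the UNIX socket.
    for ((i = 0; i < 100; i++)); do    # max_retries=100, from the trace
        "$rootdir"/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done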
00:26:42.801 [2024-11-05 17:05:31.547569] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132759 ] 00:26:43.059 [2024-11-05 17:05:31.715507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:43.059 [2024-11-05 17:05:31.942670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.059 [2024-11-05 17:05:31.942736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.059 [2024-11-05 17:05:31.942736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.317 [2024-11-05 17:05:32.192385] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:43.883 17:05:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:43.883 17:05:32 -- common/autotest_common.sh@862 -- # return 0 00:26:43.883 17:05:32 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:26:43.883 17:05:32 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:44.140 Malloc0 00:26:44.140 Malloc1 00:26:44.140 Malloc2 00:26:44.140 17:05:32 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:26:44.140 17:05:32 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:44.140 17:05:32 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:44.140 17:05:32 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:44.140 5000+0 records in 00:26:44.140 5000+0 records out 00:26:44.140 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0260951 s, 392 MB/s 00:26:44.140 17:05:32 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:44.398 AIO0 00:26:44.398 17:05:33 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 132759 00:26:44.398 17:05:33 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 132759 without_thd 00:26:44.398 17:05:33 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=132759 00:26:44.398 17:05:33 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:26:44.399 17:05:33 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:26:44.399 17:05:33 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:26:44.399 17:05:33 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:26:44.399 17:05:33 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:44.399 17:05:33 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:26:44.399 17:05:33 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:44.399 17:05:33 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:44.399 17:05:33 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:44.657 17:05:33 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:26:44.657 17:05:33 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:26:44.657 17:05:33 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:26:44.657 17:05:33 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:26:44.657 17:05:33 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:44.657 17:05:33 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:26:44.657 17:05:33 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:44.657 17:05:33 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:44.657 17:05:33 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:26:44.915 spdk_thread ids are 1 on reactor0. 00:26:44.915 17:05:33 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:26:44.915 17:05:33 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:26:44.915 17:05:33 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:44.915 17:05:33 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 132759 0 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132759 0 idle 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@33 -- # local pid=132759 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132759 -w 256 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132759 root 20 0 20.1t 144744 28924 S 0.0 1.2 0:00.69 reactor_0' 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@48 -- # echo 132759 root 20 0 20.1t 144744 28924 S 0.0 1.2 0:00.69 reactor_0 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:44.915 17:05:33 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:44.915 17:05:33 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 132759 1 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132759 1 idle 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@33 -- # local pid=132759 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:44.915 
17:05:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:44.915 17:05:33 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:44.916 17:05:33 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:44.916 17:05:33 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132759 -w 256 00:26:44.916 17:05:33 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132762 root 20 0 20.1t 144744 28924 S 0.0 1.2 0:00.00 reactor_1' 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@48 -- # echo 132762 root 20 0 20.1t 144744 28924 S 0.0 1.2 0:00.00 reactor_1 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:45.174 17:05:33 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:45.174 17:05:33 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 132759 2 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132759 2 idle 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@33 -- # local pid=132759 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132759 -w 256 00:26:45.174 17:05:33 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:45.432 17:05:34 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132763 root 20 0 20.1t 144744 28924 S 0.0 1.2 0:00.00 reactor_2' 00:26:45.432 17:05:34 -- interrupt/interrupt_common.sh@48 -- # echo 132763 root 20 0 20.1t 144744 28924 S 0.0 1.2 0:00.00 reactor_2 00:26:45.432 17:05:34 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:45.432 17:05:34 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:45.432 17:05:34 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:45.432 17:05:34 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:45.432 17:05:34 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:45.432 17:05:34 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:45.432 17:05:34 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:45.432 17:05:34 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:45.432 17:05:34 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:26:45.432 17:05:34 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 
00:26:45.432 17:05:34 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:26:45.691 [2024-11-05 17:05:34.384956] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:45.691 17:05:34 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:26:45.691 [2024-11-05 17:05:34.572528] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:26:45.691 [2024-11-05 17:05:34.573025] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:45.691 17:05:34 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:26:45.949 [2024-11-05 17:05:34.828539] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:26:45.949 [2024-11-05 17:05:34.829051] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:45.949 17:05:34 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:45.949 17:05:34 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 132759 0 00:26:45.949 17:05:34 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 132759 0 busy 00:26:45.949 17:05:34 -- interrupt/interrupt_common.sh@33 -- # local pid=132759 00:26:45.949 17:05:34 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:45.949 17:05:34 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:45.949 17:05:34 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:45.949 17:05:34 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:46.206 17:05:34 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:46.207 17:05:34 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:46.207 17:05:34 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132759 -w 256 00:26:46.207 17:05:34 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:46.207 17:05:35 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132759 root 20 0 20.1t 144856 28924 R 99.9 1.2 0:01.14 reactor_0' 00:26:46.207 17:05:35 -- interrupt/interrupt_common.sh@48 -- # echo 132759 root 20 0 20.1t 144856 28924 R 99.9 1.2 0:01.14 reactor_0 00:26:46.207 17:05:35 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:46.207 17:05:35 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:46.207 17:05:35 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:46.207 17:05:35 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:46.207 17:05:35 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:46.207 17:05:35 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:46.207 17:05:35 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:46.207 17:05:35 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:46.207 17:05:35 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:46.207 17:05:35 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 132759 2 00:26:46.207 17:05:35 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 132759 2 busy 00:26:46.207 17:05:35 -- interrupt/interrupt_common.sh@33 -- # local pid=132759 00:26:46.207 17:05:35 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:46.207 
17:05:35 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:46.207 17:05:35 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:46.207 17:05:35 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:46.207 17:05:35 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:46.207 17:05:35 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:46.207 17:05:35 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132759 -w 256 00:26:46.207 17:05:35 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:46.464 17:05:35 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132763 root 20 0 20.1t 144856 28924 R 99.9 1.2 0:00.35 reactor_2' 00:26:46.464 17:05:35 -- interrupt/interrupt_common.sh@48 -- # echo 132763 root 20 0 20.1t 144856 28924 R 99.9 1.2 0:00.35 reactor_2 00:26:46.464 17:05:35 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:46.464 17:05:35 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:46.464 17:05:35 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:46.464 17:05:35 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:46.464 17:05:35 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:46.464 17:05:35 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:46.464 17:05:35 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:46.464 17:05:35 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:46.464 17:05:35 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:26:46.721 [2024-11-05 17:05:35.368638] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:26:46.721 [2024-11-05 17:05:35.369195] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:46.721 17:05:35 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:26:46.721 17:05:35 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 132759 2 00:26:46.721 17:05:35 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132759 2 idle 00:26:46.721 17:05:35 -- interrupt/interrupt_common.sh@33 -- # local pid=132759 00:26:46.721 17:05:35 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:46.721 17:05:35 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:46.721 17:05:35 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:46.721 17:05:35 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:46.721 17:05:35 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:46.721 17:05:35 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:46.721 17:05:35 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:46.721 17:05:35 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132759 -w 256 00:26:46.721 17:05:35 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:46.721 17:05:35 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132763 root 20 0 20.1t 144924 28924 S 0.0 1.2 0:00.54 reactor_2' 00:26:46.721 17:05:35 -- interrupt/interrupt_common.sh@48 -- # echo 132763 root 20 0 20.1t 144924 28924 S 0.0 1.2 0:00.54 reactor_2 00:26:46.721 17:05:35 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:46.721 17:05:35 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:46.721 17:05:35 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:46.721 17:05:35 -- 
interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:46.721 17:05:35 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:46.721 17:05:35 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:46.721 17:05:35 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:46.721 17:05:35 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:46.721 17:05:35 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:26:46.979 [2024-11-05 17:05:35.804459] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:26:46.980 [2024-11-05 17:05:35.804815] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:46.980 17:05:35 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:26:46.980 17:05:35 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:26:46.980 17:05:35 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:26:47.238 [2024-11-05 17:05:36.052998] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:47.238 17:05:36 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 132759 0 00:26:47.238 17:05:36 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132759 0 idle 00:26:47.238 17:05:36 -- interrupt/interrupt_common.sh@33 -- # local pid=132759 00:26:47.238 17:05:36 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:47.238 17:05:36 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:47.238 17:05:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:47.238 17:05:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:47.238 17:05:36 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:47.238 17:05:36 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:47.238 17:05:36 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:47.238 17:05:36 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132759 -w 256 00:26:47.238 17:05:36 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:47.496 17:05:36 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132759 root 20 0 20.1t 145016 28924 S 0.0 1.2 0:01.94 reactor_0' 00:26:47.496 17:05:36 -- interrupt/interrupt_common.sh@48 -- # echo 132759 root 20 0 20.1t 145016 28924 S 0.0 1.2 0:01.94 reactor_0 00:26:47.496 17:05:36 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:47.496 17:05:36 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:47.496 17:05:36 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:47.496 17:05:36 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:47.496 17:05:36 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:47.496 17:05:36 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:47.496 17:05:36 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:47.496 17:05:36 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:47.496 17:05:36 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:26:47.496 17:05:36 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:26:47.496 17:05:36 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:26:47.496 17:05:36 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 132759 
00:26:47.496 17:05:36 -- common/autotest_common.sh@936 -- # '[' -z 132759 ']' 00:26:47.496 17:05:36 -- common/autotest_common.sh@940 -- # kill -0 132759 00:26:47.496 17:05:36 -- common/autotest_common.sh@941 -- # uname 00:26:47.496 17:05:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:47.496 17:05:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132759 00:26:47.496 17:05:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:47.496 17:05:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:47.496 17:05:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 132759' 00:26:47.496 killing process with pid 132759 00:26:47.496 17:05:36 -- common/autotest_common.sh@955 -- # kill 132759 00:26:47.496 17:05:36 -- common/autotest_common.sh@960 -- # wait 132759 00:26:48.871 17:05:37 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:26:48.871 17:05:37 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:48.871 17:05:37 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:26:48.871 17:05:37 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.871 17:05:37 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:48.871 17:05:37 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=132912 00:26:48.871 17:05:37 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:48.871 17:05:37 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:48.871 17:05:37 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 132912 /var/tmp/spdk.sock 00:26:48.871 17:05:37 -- common/autotest_common.sh@829 -- # '[' -z 132912 ']' 00:26:48.871 17:05:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.871 17:05:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:48.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.871 17:05:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.871 17:05:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:48.871 17:05:37 -- common/autotest_common.sh@10 -- # set +x 00:26:48.871 [2024-11-05 17:05:37.430075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:48.871 [2024-11-05 17:05:37.430514] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132912 ] 00:26:48.871 [2024-11-05 17:05:37.606818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:49.129 [2024-11-05 17:05:37.769700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.129 [2024-11-05 17:05:37.769846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.129 [2024-11-05 17:05:37.769844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:49.129 [2024-11-05 17:05:38.016869] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
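The killprocess helper traced above for pid 132759 follows a fixed sequence: validate the argument, probe liveness with kill -0, read the process name back with ps, then signal and reap. A simplified paraphrase of that sequence; the real helper treats a sudo-wrapped process specially, which is reduced here to an early return:

    # Paraphrase of the traced killprocess: verify the pid, avoid signalling
    # a sudo wrapper, then kill it and wait to reap its exit status.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                      # still running?
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [[ $name == sudo ]] && return 1             # simplified sudo handling
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                     # reap; propagates status
    }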
00:26:49.695 17:05:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:49.695 17:05:38 -- common/autotest_common.sh@862 -- # return 0 00:26:49.695 17:05:38 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:26:49.695 17:05:38 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:49.964 Malloc0 00:26:49.964 Malloc1 00:26:49.964 Malloc2 00:26:49.964 17:05:38 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:26:49.964 17:05:38 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:49.964 17:05:38 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:49.964 17:05:38 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:49.964 5000+0 records in 00:26:49.964 5000+0 records out 00:26:49.964 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0265821 s, 385 MB/s 00:26:49.964 17:05:38 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:50.222 AIO0 00:26:50.222 17:05:39 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 132912 00:26:50.222 17:05:39 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 132912 00:26:50.222 17:05:39 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=132912 00:26:50.222 17:05:39 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:26:50.222 17:05:39 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:26:50.222 17:05:39 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:26:50.222 17:05:39 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:26:50.222 17:05:39 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:50.222 17:05:39 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:26:50.222 17:05:39 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:50.222 17:05:39 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:50.222 17:05:39 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:50.481 17:05:39 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:26:50.481 17:05:39 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:26:50.481 17:05:39 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:26:50.481 17:05:39 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:26:50.481 17:05:39 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:50.481 17:05:39 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:26:50.481 17:05:39 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:50.481 17:05:39 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:50.481 17:05:39 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:50.739 17:05:39 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:26:50.739 spdk_thread ids are 1 on reactor0. 
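reactor_get_thread_ids, whose invocations for masks 0x1 and 0x4 are traced above, maps a reactor cpumask to the SPDK thread ids pinned on it by filtering thread_get_stats JSON. The jq filter below is verbatim from the trace; the $(( )) normalization is an assumed stand-in for the trace's hex-to-decimal mask strip:

    # Return the ids of spdk_threads whose cpumask matches the reactor's.
    reactor_get_thread_ids() {
        local reactor_cpumask=$1
        reactor_cpumask=$((reactor_cpumask))   # 0x1 -> 1, 0x4 -> 4, as in the trace
        "$rootdir"/scripts/rpc.py thread_get_stats \
            | jq --arg reactor_cpumask "$reactor_cpumask" \
                 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
    }

In the without-threads variant above, thd0_ids=($(reactor_get_thread_ids $r0_mask)) collects the threads that must be migrated off reactor 0 (via thread_set_cpumask) before that reactor's interrupt mode can be switched.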
00:26:50.739 17:05:39 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:26:50.739 17:05:39 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:26:50.739 17:05:39 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:50.739 17:05:39 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 132912 0 00:26:50.739 17:05:39 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132912 0 idle 00:26:50.739 17:05:39 -- interrupt/interrupt_common.sh@33 -- # local pid=132912 00:26:50.739 17:05:39 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:50.739 17:05:39 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:50.739 17:05:39 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:50.739 17:05:39 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:50.739 17:05:39 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:50.739 17:05:39 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:50.739 17:05:39 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:50.739 17:05:39 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132912 -w 256 00:26:50.739 17:05:39 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132912 root 20 0 20.1t 146064 28980 S 0.0 1.2 0:00.65 reactor_0' 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@48 -- # echo 132912 root 20 0 20.1t 146064 28980 S 0.0 1.2 0:00.65 reactor_0 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:50.998 17:05:39 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:50.998 17:05:39 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 132912 1 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132912 1 idle 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@33 -- # local pid=132912 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132912 -w 256 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132915 root 20 0 20.1t 146064 28980 S 0.0 1.2 0:00.00 reactor_1' 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@48 -- # echo 132915 root 20 0 20.1t 146064 28980 S 0.0 1.2 0:00.00 reactor_1 00:26:50.998 17:05:39 -- 
interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:50.998 17:05:39 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:50.998 17:05:39 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 132912 2 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132912 2 idle 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@33 -- # local pid=132912 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:50.998 17:05:39 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:51.257 17:05:39 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132912 -w 256 00:26:51.257 17:05:39 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:51.257 17:05:40 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132916 root 20 0 20.1t 146064 28980 S 0.0 1.2 0:00.00 reactor_2' 00:26:51.257 17:05:40 -- interrupt/interrupt_common.sh@48 -- # echo 132916 root 20 0 20.1t 146064 28980 S 0.0 1.2 0:00.00 reactor_2 00:26:51.257 17:05:40 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:51.257 17:05:40 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:51.257 17:05:40 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:51.257 17:05:40 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:51.257 17:05:40 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:51.257 17:05:40 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:51.257 17:05:40 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:51.257 17:05:40 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:51.257 17:05:40 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:26:51.257 17:05:40 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:26:51.545 [2024-11-05 17:05:40.305189] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:26:51.545 [2024-11-05 17:05:40.305736] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
00:26:51.545 [2024-11-05 17:05:40.306139] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:51.545 17:05:40 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:26:51.806 [2024-11-05 17:05:40.553007] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:26:51.806 [2024-11-05 17:05:40.553560] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:51.806 17:05:40 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:51.806 17:05:40 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 132912 0 00:26:51.806 17:05:40 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 132912 0 busy 00:26:51.806 17:05:40 -- interrupt/interrupt_common.sh@33 -- # local pid=132912 00:26:51.806 17:05:40 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:51.806 17:05:40 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:51.806 17:05:40 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:51.807 17:05:40 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:51.807 17:05:40 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:51.807 17:05:40 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:51.807 17:05:40 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132912 -w 256 00:26:51.807 17:05:40 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:52.065 17:05:40 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132912 root 20 0 20.1t 146136 28980 R 99.9 1.2 0:01.09 reactor_0' 00:26:52.065 17:05:40 -- interrupt/interrupt_common.sh@48 -- # echo 132912 root 20 0 20.1t 146136 28980 R 99.9 1.2 0:01.09 reactor_0 00:26:52.065 17:05:40 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:52.065 17:05:40 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:52.065 17:05:40 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:52.065 17:05:40 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:52.065 17:05:40 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:52.065 17:05:40 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:52.065 17:05:40 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:52.065 17:05:40 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:52.065 17:05:40 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:52.065 17:05:40 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 132912 2 00:26:52.065 17:05:40 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 132912 2 busy 00:26:52.065 17:05:40 -- interrupt/interrupt_common.sh@33 -- # local pid=132912 00:26:52.065 17:05:40 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:52.065 17:05:40 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:52.065 17:05:40 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:52.066 17:05:40 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:52.066 17:05:40 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:52.066 17:05:40 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:52.066 17:05:40 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132912 -w 256 00:26:52.066 17:05:40 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:52.066 17:05:40 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 
132916 root 20 0 20.1t 146136 28980 R 99.9 1.2 0:00.34 reactor_2' 00:26:52.066 17:05:40 -- interrupt/interrupt_common.sh@48 -- # echo 132916 root 20 0 20.1t 146136 28980 R 99.9 1.2 0:00.34 reactor_2 00:26:52.066 17:05:40 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:52.066 17:05:40 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:52.066 17:05:40 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:52.066 17:05:40 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:52.066 17:05:40 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:52.066 17:05:40 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:52.066 17:05:40 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:52.066 17:05:40 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:52.066 17:05:40 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:26:52.325 [2024-11-05 17:05:41.093161] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:26:52.325 [2024-11-05 17:05:41.093551] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:52.325 17:05:41 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:26:52.325 17:05:41 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 132912 2 00:26:52.325 17:05:41 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132912 2 idle 00:26:52.325 17:05:41 -- interrupt/interrupt_common.sh@33 -- # local pid=132912 00:26:52.325 17:05:41 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:52.325 17:05:41 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:52.325 17:05:41 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:52.325 17:05:41 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:52.325 17:05:41 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:52.325 17:05:41 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:52.325 17:05:41 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:52.325 17:05:41 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132912 -w 256 00:26:52.325 17:05:41 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:52.584 17:05:41 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132916 root 20 0 20.1t 146200 28980 S 0.0 1.2 0:00.54 reactor_2' 00:26:52.584 17:05:41 -- interrupt/interrupt_common.sh@48 -- # echo 132916 root 20 0 20.1t 146200 28980 S 0.0 1.2 0:00.54 reactor_2 00:26:52.584 17:05:41 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:52.584 17:05:41 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:52.584 17:05:41 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:52.584 17:05:41 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:52.584 17:05:41 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:52.584 17:05:41 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:52.584 17:05:41 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:52.584 17:05:41 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:52.584 17:05:41 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:26:52.843 [2024-11-05 17:05:41.513224] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to 
enable interrupt mode on reactor 0. 00:26:52.843 [2024-11-05 17:05:41.513685] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:26:52.843 [2024-11-05 17:05:41.513858] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:52.843 17:05:41 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:26:52.843 17:05:41 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 132912 0 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132912 0 idle 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@33 -- # local pid=132912 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132912 -w 256 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132912 root 20 0 20.1t 146244 28980 S 6.7 1.2 0:01.88 reactor_0' 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@48 -- # echo 132912 root 20 0 20.1t 146244 28980 S 6.7 1.2 0:01.88 reactor_0 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:26:52.843 17:05:41 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:52.843 17:05:41 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:26:52.843 17:05:41 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:26:52.843 17:05:41 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:26:52.843 17:05:41 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 132912 00:26:52.843 17:05:41 -- common/autotest_common.sh@936 -- # '[' -z 132912 ']' 00:26:52.843 17:05:41 -- common/autotest_common.sh@940 -- # kill -0 132912 00:26:52.843 17:05:41 -- common/autotest_common.sh@941 -- # uname 00:26:52.843 17:05:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:52.843 17:05:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132912 00:26:52.843 17:05:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:52.843 17:05:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:52.843 17:05:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 132912' 00:26:52.843 killing process with pid 132912 00:26:52.843 17:05:41 -- common/autotest_common.sh@955 -- # kill 132912 00:26:52.843 17:05:41 -- common/autotest_common.sh@960 -- # wait 132912 00:26:54.220 17:05:42 -- 
interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:26:54.220 17:05:42 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:54.220 ************************************ 00:26:54.220 END TEST reactor_set_interrupt 00:26:54.220 ************************************ 00:26:54.220 00:26:54.220 real 0m11.816s 00:26:54.220 user 0m11.858s 00:26:54.220 sys 0m1.623s 00:26:54.220 17:05:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:54.220 17:05:42 -- common/autotest_common.sh@10 -- # set +x 00:26:54.220 17:05:42 -- spdk/autotest.sh@187 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:54.220 17:05:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:54.220 17:05:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:54.220 17:05:42 -- common/autotest_common.sh@10 -- # set +x 00:26:54.220 ************************************ 00:26:54.220 START TEST reap_unregistered_poller 00:26:54.220 ************************************ 00:26:54.220 17:05:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:54.220 * Looking for test storage... 00:26:54.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:54.220 17:05:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:54.220 17:05:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:54.220 17:05:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:54.220 17:05:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:54.220 17:05:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:54.220 17:05:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:54.220 17:05:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:54.220 17:05:43 -- scripts/common.sh@335 -- # IFS=.-: 00:26:54.220 17:05:43 -- scripts/common.sh@335 -- # read -ra ver1 00:26:54.220 17:05:43 -- scripts/common.sh@336 -- # IFS=.-: 00:26:54.220 17:05:43 -- scripts/common.sh@336 -- # read -ra ver2 00:26:54.220 17:05:43 -- scripts/common.sh@337 -- # local 'op=<' 00:26:54.220 17:05:43 -- scripts/common.sh@339 -- # ver1_l=2 00:26:54.220 17:05:43 -- scripts/common.sh@340 -- # ver2_l=1 00:26:54.220 17:05:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:54.220 17:05:43 -- scripts/common.sh@343 -- # case "$op" in 00:26:54.220 17:05:43 -- scripts/common.sh@344 -- # : 1 00:26:54.220 17:05:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:54.220 17:05:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:54.220 17:05:43 -- scripts/common.sh@364 -- # decimal 1 00:26:54.220 17:05:43 -- scripts/common.sh@352 -- # local d=1 00:26:54.220 17:05:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:54.220 17:05:43 -- scripts/common.sh@354 -- # echo 1 00:26:54.220 17:05:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:54.220 17:05:43 -- scripts/common.sh@365 -- # decimal 2 00:26:54.220 17:05:43 -- scripts/common.sh@352 -- # local d=2 00:26:54.220 17:05:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:54.220 17:05:43 -- scripts/common.sh@354 -- # echo 2 00:26:54.220 17:05:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:54.220 17:05:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:54.220 17:05:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:54.220 17:05:43 -- scripts/common.sh@367 -- # return 0 00:26:54.220 17:05:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:54.220 17:05:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:54.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.220 --rc genhtml_branch_coverage=1 00:26:54.220 --rc genhtml_function_coverage=1 00:26:54.220 --rc genhtml_legend=1 00:26:54.220 --rc geninfo_all_blocks=1 00:26:54.220 --rc geninfo_unexecuted_blocks=1 00:26:54.220 00:26:54.220 ' 00:26:54.220 17:05:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:54.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.220 --rc genhtml_branch_coverage=1 00:26:54.220 --rc genhtml_function_coverage=1 00:26:54.220 --rc genhtml_legend=1 00:26:54.220 --rc geninfo_all_blocks=1 00:26:54.220 --rc geninfo_unexecuted_blocks=1 00:26:54.220 00:26:54.220 ' 00:26:54.220 17:05:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:54.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.220 --rc genhtml_branch_coverage=1 00:26:54.220 --rc genhtml_function_coverage=1 00:26:54.220 --rc genhtml_legend=1 00:26:54.220 --rc geninfo_all_blocks=1 00:26:54.220 --rc geninfo_unexecuted_blocks=1 00:26:54.220 00:26:54.220 ' 00:26:54.220 17:05:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:54.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.220 --rc genhtml_branch_coverage=1 00:26:54.220 --rc genhtml_function_coverage=1 00:26:54.220 --rc genhtml_legend=1 00:26:54.220 --rc geninfo_all_blocks=1 00:26:54.220 --rc geninfo_unexecuted_blocks=1 00:26:54.220 00:26:54.220 ' 00:26:54.220 17:05:43 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:26:54.220 17:05:43 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:54.220 17:05:43 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:54.220 17:05:43 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:54.220 17:05:43 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
00:26:54.220 17:05:43 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:54.220 17:05:43 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:26:54.220 17:05:43 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:54.220 17:05:43 -- common/autotest_common.sh@34 -- # set -e 00:26:54.220 17:05:43 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:54.220 17:05:43 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:54.220 17:05:43 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:26:54.220 17:05:43 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:26:54.220 17:05:43 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:26:54.220 17:05:43 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:26:54.220 17:05:43 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:26:54.220 17:05:43 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:54.220 17:05:43 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:26:54.220 17:05:43 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:26:54.220 17:05:43 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:26:54.220 17:05:43 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:26:54.220 17:05:43 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:26:54.220 17:05:43 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:26:54.220 17:05:43 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:26:54.220 17:05:43 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:26:54.220 17:05:43 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:26:54.220 17:05:43 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:26:54.220 17:05:43 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:54.220 17:05:43 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:26:54.220 17:05:43 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:26:54.220 17:05:43 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:54.220 17:05:43 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:54.220 17:05:43 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:26:54.220 17:05:43 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:26:54.220 17:05:43 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:26:54.221 17:05:43 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:26:54.221 17:05:43 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:26:54.221 17:05:43 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:26:54.221 17:05:43 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:26:54.221 17:05:43 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:26:54.221 17:05:43 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:26:54.221 17:05:43 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:26:54.221 17:05:43 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:26:54.221 17:05:43 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:26:54.221 17:05:43 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:26:54.221 17:05:43 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:26:54.221 17:05:43 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:26:54.221 17:05:43 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:26:54.221 17:05:43 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:26:54.221 
17:05:43 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:26:54.221 17:05:43 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:26:54.221 17:05:43 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:26:54.221 17:05:43 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:26:54.221 17:05:43 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:26:54.221 17:05:43 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:26:54.221 17:05:43 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:26:54.221 17:05:43 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:54.221 17:05:43 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:26:54.221 17:05:43 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:26:54.221 17:05:43 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:26:54.221 17:05:43 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:54.221 17:05:43 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:26:54.221 17:05:43 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:26:54.221 17:05:43 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:26:54.221 17:05:43 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:26:54.221 17:05:43 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:26:54.221 17:05:43 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:26:54.221 17:05:43 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:26:54.221 17:05:43 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:26:54.221 17:05:43 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:26:54.221 17:05:43 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:26:54.221 17:05:43 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:26:54.221 17:05:43 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:26:54.221 17:05:43 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:26:54.221 17:05:43 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:26:54.221 17:05:43 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:26:54.221 17:05:43 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:26:54.221 17:05:43 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:26:54.221 17:05:43 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:54.221 17:05:43 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:26:54.221 17:05:43 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:26:54.221 17:05:43 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:26:54.221 17:05:43 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:26:54.221 17:05:43 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:26:54.221 17:05:43 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:26:54.221 17:05:43 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:26:54.221 17:05:43 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:26:54.221 17:05:43 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:26:54.221 17:05:43 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:26:54.221 17:05:43 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:26:54.221 17:05:43 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:26:54.221 17:05:43 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:26:54.221 17:05:43 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:54.221 17:05:43 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:54.481 17:05:43 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:26:54.481 
17:05:43 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:26:54.481 17:05:43 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:26:54.481 17:05:43 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:26:54.481 17:05:43 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:26:54.481 17:05:43 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:26:54.481 17:05:43 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:54.481 17:05:43 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:54.481 17:05:43 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:54.481 17:05:43 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:54.481 17:05:43 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:54.481 17:05:43 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:54.481 17:05:43 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:26:54.481 17:05:43 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:54.481 #define SPDK_CONFIG_H 00:26:54.481 #define SPDK_CONFIG_APPS 1 00:26:54.481 #define SPDK_CONFIG_ARCH native 00:26:54.481 #define SPDK_CONFIG_ASAN 1 00:26:54.481 #undef SPDK_CONFIG_AVAHI 00:26:54.481 #undef SPDK_CONFIG_CET 00:26:54.481 #define SPDK_CONFIG_COVERAGE 1 00:26:54.481 #define SPDK_CONFIG_CROSS_PREFIX 00:26:54.481 #undef SPDK_CONFIG_CRYPTO 00:26:54.481 #undef SPDK_CONFIG_CRYPTO_MLX5 00:26:54.481 #undef SPDK_CONFIG_CUSTOMOCF 00:26:54.481 #undef SPDK_CONFIG_DAOS 00:26:54.481 #define SPDK_CONFIG_DAOS_DIR 00:26:54.481 #define SPDK_CONFIG_DEBUG 1 00:26:54.481 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:26:54.481 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:26:54.481 #define SPDK_CONFIG_DPDK_INC_DIR 00:26:54.481 #define SPDK_CONFIG_DPDK_LIB_DIR 00:26:54.481 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:54.481 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:54.481 #define SPDK_CONFIG_EXAMPLES 1 00:26:54.481 #undef SPDK_CONFIG_FC 00:26:54.481 #define SPDK_CONFIG_FC_PATH 00:26:54.481 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:54.481 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:54.481 #undef SPDK_CONFIG_FUSE 00:26:54.481 #undef SPDK_CONFIG_FUZZER 00:26:54.481 #define SPDK_CONFIG_FUZZER_LIB 00:26:54.481 #undef SPDK_CONFIG_GOLANG 00:26:54.481 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:26:54.481 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:54.481 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:54.481 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:54.481 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:54.481 #define SPDK_CONFIG_IDXD 1 00:26:54.481 #undef SPDK_CONFIG_IDXD_KERNEL 00:26:54.481 #undef SPDK_CONFIG_IPSEC_MB 00:26:54.481 #define SPDK_CONFIG_IPSEC_MB_DIR 00:26:54.481 #define SPDK_CONFIG_ISAL 1 00:26:54.481 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:54.481 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:54.481 #define SPDK_CONFIG_LIBDIR 00:26:54.481 #undef SPDK_CONFIG_LTO 00:26:54.481 #define SPDK_CONFIG_MAX_LCORES 00:26:54.481 #define SPDK_CONFIG_NVME_CUSE 1 00:26:54.481 #undef SPDK_CONFIG_OCF 00:26:54.481 #define SPDK_CONFIG_OCF_PATH 00:26:54.481 #define SPDK_CONFIG_OPENSSL_PATH 00:26:54.481 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:54.481 #undef SPDK_CONFIG_PGO_USE 00:26:54.481 #define SPDK_CONFIG_PREFIX /usr/local 
00:26:54.481 #define SPDK_CONFIG_RAID5F 1 00:26:54.481 #undef SPDK_CONFIG_RBD 00:26:54.481 #define SPDK_CONFIG_RDMA 1 00:26:54.481 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:54.481 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:54.481 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:54.481 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:54.481 #undef SPDK_CONFIG_SHARED 00:26:54.481 #undef SPDK_CONFIG_SMA 00:26:54.481 #define SPDK_CONFIG_TESTS 1 00:26:54.481 #undef SPDK_CONFIG_TSAN 00:26:54.481 #undef SPDK_CONFIG_UBLK 00:26:54.481 #define SPDK_CONFIG_UBSAN 1 00:26:54.481 #define SPDK_CONFIG_UNIT_TESTS 1 00:26:54.481 #undef SPDK_CONFIG_URING 00:26:54.481 #define SPDK_CONFIG_URING_PATH 00:26:54.481 #undef SPDK_CONFIG_URING_ZNS 00:26:54.481 #undef SPDK_CONFIG_USDT 00:26:54.481 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:26:54.481 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:26:54.481 #undef SPDK_CONFIG_VFIO_USER 00:26:54.481 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:54.481 #define SPDK_CONFIG_VHOST 1 00:26:54.481 #define SPDK_CONFIG_VIRTIO 1 00:26:54.481 #undef SPDK_CONFIG_VTUNE 00:26:54.481 #define SPDK_CONFIG_VTUNE_DIR 00:26:54.481 #define SPDK_CONFIG_WERROR 1 00:26:54.481 #define SPDK_CONFIG_WPDK_DIR 00:26:54.481 #undef SPDK_CONFIG_XNVME 00:26:54.481 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:54.481 17:05:43 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:54.481 17:05:43 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:54.481 17:05:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.481 17:05:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.481 17:05:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.481 17:05:43 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:54.482 17:05:43 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:54.482 17:05:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:54.482 17:05:43 -- paths/export.sh@5 -- # export PATH 00:26:54.482 17:05:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:54.482 17:05:43 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:54.482 17:05:43 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:54.482 17:05:43 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:54.482 17:05:43 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:54.482 17:05:43 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:26:54.482 17:05:43 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:26:54.482 17:05:43 -- pm/common@16 -- # TEST_TAG=N/A 00:26:54.482 17:05:43 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:26:54.482 17:05:43 -- common/autotest_common.sh@52 -- # : 1 00:26:54.482 17:05:43 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:26:54.482 17:05:43 -- common/autotest_common.sh@56 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:54.482 17:05:43 -- common/autotest_common.sh@58 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:26:54.482 17:05:43 -- common/autotest_common.sh@60 -- # : 1 00:26:54.482 17:05:43 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:54.482 17:05:43 -- common/autotest_common.sh@62 -- # : 1 00:26:54.482 17:05:43 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:26:54.482 17:05:43 -- common/autotest_common.sh@64 -- # : 00:26:54.482 17:05:43 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:26:54.482 17:05:43 -- common/autotest_common.sh@66 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:26:54.482 17:05:43 -- common/autotest_common.sh@68 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:26:54.482 17:05:43 -- common/autotest_common.sh@70 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:26:54.482 17:05:43 -- common/autotest_common.sh@72 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:54.482 17:05:43 -- common/autotest_common.sh@74 -- # : 1 00:26:54.482 17:05:43 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:26:54.482 17:05:43 -- common/autotest_common.sh@76 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:26:54.482 17:05:43 -- common/autotest_common.sh@78 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:26:54.482 17:05:43 -- common/autotest_common.sh@80 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:26:54.482 17:05:43 -- common/autotest_common.sh@82 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:26:54.482 17:05:43 -- common/autotest_common.sh@84 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:26:54.482 17:05:43 -- 
common/autotest_common.sh@86 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:26:54.482 17:05:43 -- common/autotest_common.sh@88 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:26:54.482 17:05:43 -- common/autotest_common.sh@90 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:54.482 17:05:43 -- common/autotest_common.sh@92 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:26:54.482 17:05:43 -- common/autotest_common.sh@94 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:26:54.482 17:05:43 -- common/autotest_common.sh@96 -- # : rdma 00:26:54.482 17:05:43 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:54.482 17:05:43 -- common/autotest_common.sh@98 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:26:54.482 17:05:43 -- common/autotest_common.sh@100 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:26:54.482 17:05:43 -- common/autotest_common.sh@102 -- # : 1 00:26:54.482 17:05:43 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:26:54.482 17:05:43 -- common/autotest_common.sh@104 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:26:54.482 17:05:43 -- common/autotest_common.sh@106 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:26:54.482 17:05:43 -- common/autotest_common.sh@108 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:26:54.482 17:05:43 -- common/autotest_common.sh@110 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:26:54.482 17:05:43 -- common/autotest_common.sh@112 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:54.482 17:05:43 -- common/autotest_common.sh@114 -- # : 1 00:26:54.482 17:05:43 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:26:54.482 17:05:43 -- common/autotest_common.sh@116 -- # : 1 00:26:54.482 17:05:43 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:26:54.482 17:05:43 -- common/autotest_common.sh@118 -- # : 00:26:54.482 17:05:43 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:54.482 17:05:43 -- common/autotest_common.sh@120 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:26:54.482 17:05:43 -- common/autotest_common.sh@122 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:26:54.482 17:05:43 -- common/autotest_common.sh@124 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:26:54.482 17:05:43 -- common/autotest_common.sh@126 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:26:54.482 17:05:43 -- common/autotest_common.sh@128 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:26:54.482 17:05:43 -- common/autotest_common.sh@130 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:26:54.482 17:05:43 -- common/autotest_common.sh@132 -- # : 00:26:54.482 17:05:43 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:26:54.482 
17:05:43 -- common/autotest_common.sh@134 -- # : true 00:26:54.482 17:05:43 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:26:54.482 17:05:43 -- common/autotest_common.sh@136 -- # : 1 00:26:54.482 17:05:43 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:26:54.482 17:05:43 -- common/autotest_common.sh@138 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:26:54.482 17:05:43 -- common/autotest_common.sh@140 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:26:54.482 17:05:43 -- common/autotest_common.sh@142 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:26:54.482 17:05:43 -- common/autotest_common.sh@144 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:26:54.482 17:05:43 -- common/autotest_common.sh@146 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:26:54.482 17:05:43 -- common/autotest_common.sh@148 -- # : 00:26:54.482 17:05:43 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:26:54.482 17:05:43 -- common/autotest_common.sh@150 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:26:54.482 17:05:43 -- common/autotest_common.sh@152 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:26:54.482 17:05:43 -- common/autotest_common.sh@154 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:26:54.482 17:05:43 -- common/autotest_common.sh@156 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:26:54.482 17:05:43 -- common/autotest_common.sh@158 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:26:54.482 17:05:43 -- common/autotest_common.sh@160 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:26:54.482 17:05:43 -- common/autotest_common.sh@163 -- # : 00:26:54.482 17:05:43 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:26:54.482 17:05:43 -- common/autotest_common.sh@165 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:26:54.482 17:05:43 -- common/autotest_common.sh@167 -- # : 0 00:26:54.482 17:05:43 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:54.482 17:05:43 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:54.482 17:05:43 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:54.482 17:05:43 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:54.482 17:05:43 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:54.482 17:05:43 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:54.482 17:05:43 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:54.482 17:05:43 -- common/autotest_common.sh@174 -- # export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:54.483 17:05:43 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:54.483 17:05:43 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:26:54.483 17:05:43 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:54.483 17:05:43 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:54.483 17:05:43 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:54.483 17:05:43 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:54.483 17:05:43 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:26:54.483 17:05:43 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:54.483 17:05:43 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:54.483 17:05:43 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:54.483 17:05:43 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:54.483 17:05:43 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:54.483 17:05:43 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:26:54.483 17:05:43 -- common/autotest_common.sh@196 -- # cat 00:26:54.483 17:05:43 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:26:54.483 17:05:43 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:54.483 17:05:43 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:54.483 17:05:43 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:54.483 17:05:43 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:54.483 17:05:43 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:26:54.483 17:05:43 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:26:54.483 17:05:43 -- common/autotest_common.sh@235 -- # export 
SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:54.483 17:05:43 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:54.483 17:05:43 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:54.483 17:05:43 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:54.483 17:05:43 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:26:54.483 17:05:43 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:26:54.483 17:05:43 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:54.483 17:05:43 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:54.483 17:05:43 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:54.483 17:05:43 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:54.483 17:05:43 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:54.483 17:05:43 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:54.483 17:05:43 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:26:54.483 17:05:43 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:26:54.483 17:05:43 -- common/autotest_common.sh@249 -- # _LCOV= 00:26:54.483 17:05:43 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:26:54.483 17:05:43 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:26:54.483 17:05:43 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:26:54.483 17:05:43 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:26:54.483 17:05:43 -- common/autotest_common.sh@255 -- # lcov_opt= 00:26:54.483 17:05:43 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:26:54.483 17:05:43 -- common/autotest_common.sh@259 -- # export valgrind= 00:26:54.483 17:05:43 -- common/autotest_common.sh@259 -- # valgrind= 00:26:54.483 17:05:43 -- common/autotest_common.sh@265 -- # uname -s 00:26:54.483 17:05:43 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:26:54.483 17:05:43 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:26:54.483 17:05:43 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:26:54.483 17:05:43 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:26:54.483 17:05:43 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:26:54.483 17:05:43 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:26:54.483 17:05:43 -- common/autotest_common.sh@275 -- # MAKE=make 00:26:54.483 17:05:43 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:26:54.483 17:05:43 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:26:54.483 17:05:43 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:26:54.483 17:05:43 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:26:54.483 17:05:43 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:26:54.483 17:05:43 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:26:54.483 17:05:43 -- common/autotest_common.sh@319 -- # [[ -z 133082 ]] 00:26:54.483 17:05:43 -- common/autotest_common.sh@319 -- # kill -0 133082 00:26:54.483 17:05:43 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:26:54.483 17:05:43 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 
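The `kill -0 133082` probe above is the standard PID-liveness idiom: signal 0 is never actually delivered, so the call only reports whether the process exists and is signalable, which is what gates the set_test_storage step. A small sketch of the pattern, with a made-up PID for illustration:

    # Sketch: "kill -0" checks existence/permission without sending a signal.
    pid=133082   # hypothetical PID, standing in for the test runner's own
    if kill -0 "$pid" 2>/dev/null; then
        echo "process $pid is alive; safe to provision scratch storage"
    else
        echo "process $pid is gone (or owned by another user)"
    fi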
00:26:54.483 17:05:43 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:26:54.483 17:05:43 -- common/autotest_common.sh@332 -- # local mount target_dir 00:26:54.483 17:05:43 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:26:54.483 17:05:43 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:26:54.483 17:05:43 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:26:54.483 17:05:43 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:26:54.483 17:05:43 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.OhlwBR 00:26:54.483 17:05:43 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:54.483 17:05:43 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:26:54.483 17:05:43 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:26:54.483 17:05:43 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.OhlwBR/tests/interrupt /tmp/spdk.OhlwBR 00:26:54.483 17:05:43 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:26:54.483 17:05:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:54.483 17:05:43 -- common/autotest_common.sh@328 -- # df -T 00:26:54.483 17:05:43 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:26:54.483 17:05:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:26:54.483 17:05:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:26:54.483 17:05:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=1248956416 00:26:54.483 17:05:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253683200 00:26:54.483 17:05:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=4726784 00:26:54.483 17:05:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:54.483 17:05:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda1 00:26:54.483 17:05:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:26:54.483 17:05:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=10293649408 00:26:54.483 17:05:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20616794112 00:26:54.483 17:05:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=10306367488 00:26:54.483 17:05:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:54.483 17:05:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:26:54.483 17:05:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:26:54.483 17:05:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265810944 00:26:54.483 17:05:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6268403712 00:26:54.483 17:05:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=2592768 00:26:54.483 17:05:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:54.483 17:05:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:26:54.483 17:05:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:26:54.483 17:05:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=5242880 00:26:54.483 17:05:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5242880 00:26:54.483 17:05:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:26:54.483 17:05:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:54.483 17:05:43 -- 
common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda15 00:26:54.483 17:05:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:26:54.483 17:05:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=103061504 00:26:54.483 17:05:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=109395968 00:26:54.483 17:05:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=6334464 00:26:54.483 17:05:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:54.483 17:05:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:26:54.483 17:05:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:26:54.483 17:05:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253675008 00:26:54.483 17:05:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253679104 00:26:54.483 17:05:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=4096 00:26:54.483 17:05:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:54.483 17:05:43 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt/output 00:26:54.483 17:05:43 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:26:54.483 17:05:43 -- common/autotest_common.sh@363 -- # avails["$mount"]=98692825088 00:26:54.483 17:05:43 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:26:54.483 17:05:43 -- common/autotest_common.sh@364 -- # uses["$mount"]=1009954816 00:26:54.483 17:05:43 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:26:54.483 17:05:43 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:26:54.483 * Looking for test storage... 00:26:54.483 17:05:43 -- common/autotest_common.sh@369 -- # local target_space new_size 00:26:54.483 17:05:43 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:26:54.483 17:05:43 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:54.483 17:05:43 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:54.483 17:05:43 -- common/autotest_common.sh@373 -- # mount=/ 00:26:54.483 17:05:43 -- common/autotest_common.sh@375 -- # target_space=10293649408 00:26:54.483 17:05:43 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:26:54.483 17:05:43 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:26:54.484 17:05:43 -- common/autotest_common.sh@381 -- # [[ ext4 == tmpfs ]] 00:26:54.484 17:05:43 -- common/autotest_common.sh@381 -- # [[ ext4 == ramfs ]] 00:26:54.484 17:05:43 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:26:54.484 17:05:43 -- common/autotest_common.sh@382 -- # new_size=12520960000 00:26:54.484 17:05:43 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:26:54.484 17:05:43 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:54.484 17:05:43 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:54.484 17:05:43 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:54.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:54.484 17:05:43 -- common/autotest_common.sh@390 -- # return 0 00:26:54.484 17:05:43 -- common/autotest_common.sh@1677 -- # 
set -o errtrace 00:26:54.484 17:05:43 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:26:54.484 17:05:43 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:54.484 17:05:43 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:54.484 17:05:43 -- common/autotest_common.sh@1682 -- # true 00:26:54.484 17:05:43 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:26:54.484 17:05:43 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:26:54.484 17:05:43 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:26:54.484 17:05:43 -- common/autotest_common.sh@27 -- # exec 00:26:54.484 17:05:43 -- common/autotest_common.sh@29 -- # exec 00:26:54.484 17:05:43 -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:54.484 17:05:43 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:26:54.484 17:05:43 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:54.484 17:05:43 -- common/autotest_common.sh@18 -- # set -x 00:26:54.484 17:05:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:54.484 17:05:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:54.484 17:05:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:54.484 17:05:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:54.484 17:05:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:54.484 17:05:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:54.484 17:05:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:54.484 17:05:43 -- scripts/common.sh@335 -- # IFS=.-: 00:26:54.484 17:05:43 -- scripts/common.sh@335 -- # read -ra ver1 00:26:54.484 17:05:43 -- scripts/common.sh@336 -- # IFS=.-: 00:26:54.484 17:05:43 -- scripts/common.sh@336 -- # read -ra ver2 00:26:54.484 17:05:43 -- scripts/common.sh@337 -- # local 'op=<' 00:26:54.484 17:05:43 -- scripts/common.sh@339 -- # ver1_l=2 00:26:54.484 17:05:43 -- scripts/common.sh@340 -- # ver2_l=1 00:26:54.484 17:05:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:54.484 17:05:43 -- scripts/common.sh@343 -- # case "$op" in 00:26:54.484 17:05:43 -- scripts/common.sh@344 -- # : 1 00:26:54.484 17:05:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:54.484 17:05:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:54.484 17:05:43 -- scripts/common.sh@364 -- # decimal 1 00:26:54.484 17:05:43 -- scripts/common.sh@352 -- # local d=1 00:26:54.484 17:05:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:54.484 17:05:43 -- scripts/common.sh@354 -- # echo 1 00:26:54.484 17:05:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:54.484 17:05:43 -- scripts/common.sh@365 -- # decimal 2 00:26:54.484 17:05:43 -- scripts/common.sh@352 -- # local d=2 00:26:54.484 17:05:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:54.484 17:05:43 -- scripts/common.sh@354 -- # echo 2 00:26:54.484 17:05:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:54.484 17:05:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:54.484 17:05:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:54.484 17:05:43 -- scripts/common.sh@367 -- # return 0 00:26:54.484 17:05:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:54.484 17:05:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:54.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.484 --rc genhtml_branch_coverage=1 00:26:54.484 --rc genhtml_function_coverage=1 00:26:54.484 --rc genhtml_legend=1 00:26:54.484 --rc geninfo_all_blocks=1 00:26:54.484 --rc geninfo_unexecuted_blocks=1 00:26:54.484 00:26:54.484 ' 00:26:54.484 17:05:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:54.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.484 --rc genhtml_branch_coverage=1 00:26:54.484 --rc genhtml_function_coverage=1 00:26:54.484 --rc genhtml_legend=1 00:26:54.484 --rc geninfo_all_blocks=1 00:26:54.484 --rc geninfo_unexecuted_blocks=1 00:26:54.484 00:26:54.484 ' 00:26:54.484 17:05:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:54.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.484 --rc genhtml_branch_coverage=1 00:26:54.484 --rc genhtml_function_coverage=1 00:26:54.484 --rc genhtml_legend=1 00:26:54.484 --rc geninfo_all_blocks=1 00:26:54.484 --rc geninfo_unexecuted_blocks=1 00:26:54.484 00:26:54.484 ' 00:26:54.484 17:05:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:54.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.484 --rc genhtml_branch_coverage=1 00:26:54.484 --rc genhtml_function_coverage=1 00:26:54.484 --rc genhtml_legend=1 00:26:54.484 --rc geninfo_all_blocks=1 00:26:54.484 --rc geninfo_unexecuted_blocks=1 00:26:54.484 00:26:54.484 ' 00:26:54.484 17:05:43 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:54.484 17:05:43 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:26:54.484 17:05:43 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:26:54.484 17:05:43 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:26:54.484 17:05:43 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:26:54.484 17:05:43 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:26:54.484 17:05:43 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:54.484 17:05:43 -- interrupt/reap_unregistered_poller.sh@14 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:54.484 17:05:43 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:26:54.484 17:05:43 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.484 17:05:43 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:54.484 17:05:43 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=133153 00:26:54.484 17:05:43 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:54.484 17:05:43 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:54.484 17:05:43 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 133153 /var/tmp/spdk.sock 00:26:54.484 17:05:43 -- common/autotest_common.sh@829 -- # '[' -z 133153 ']' 00:26:54.484 17:05:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.484 17:05:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:54.484 17:05:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.484 17:05:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:54.484 17:05:43 -- common/autotest_common.sh@10 -- # set +x 00:26:54.742 [2024-11-05 17:05:43.398599] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:54.742 [2024-11-05 17:05:43.399552] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133153 ] 00:26:54.742 [2024-11-05 17:05:43.576910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:55.001 [2024-11-05 17:05:43.740690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.001 [2024-11-05 17:05:43.740833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.001 [2024-11-05 17:05:43.740832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:55.259 [2024-11-05 17:05:43.986170] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
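waitforlisten above blocks until interrupt_tgt has come up and its RPC socket at /var/tmp/spdk.sock is usable. A simplified sketch of that wait loop under stated assumptions (the harness's real waitforlisten in autotest_common.sh also honors max_retries=100 and probes the socket with an actual RPC call rather than a bare file test):

    # Sketch: poll until the target's UNIX-domain RPC socket appears,
    # bailing out early if the target process dies during startup.
    wait_for_unix_socket() {
        local pid=$1 sock=$2 retries=${3:-100}
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited early
            [[ -S "$sock" ]] && return 0             # socket is in place
            sleep 0.1
        done
        return 1                                     # timed out
    }
    # wait_for_unix_socket "$intr_tgt_pid" /var/tmp/spdk.sock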
00:26:55.518 17:05:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:55.518 17:05:44 -- common/autotest_common.sh@862 -- # return 0 00:26:55.518 17:05:44 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:26:55.518 17:05:44 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:26:55.518 17:05:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.518 17:05:44 -- common/autotest_common.sh@10 -- # set +x 00:26:55.518 17:05:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.518 17:05:44 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:26:55.518 "name": "app_thread", 00:26:55.518 "id": 1, 00:26:55.518 "active_pollers": [], 00:26:55.518 "timed_pollers": [ 00:26:55.518 { 00:26:55.518 "name": "rpc_subsystem_poll", 00:26:55.518 "id": 1, 00:26:55.518 "state": "waiting", 00:26:55.518 "run_count": 0, 00:26:55.518 "busy_count": 0, 00:26:55.518 "period_ticks": 8800000 00:26:55.518 } 00:26:55.518 ], 00:26:55.518 "paused_pollers": [] 00:26:55.518 }' 00:26:55.518 17:05:44 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:26:55.776 17:05:44 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:26:55.776 17:05:44 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:26:55.776 17:05:44 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:26:55.776 17:05:44 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:26:55.776 17:05:44 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:26:55.776 17:05:44 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:55.776 17:05:44 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:55.776 17:05:44 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:55.776 5000+0 records in 00:26:55.776 5000+0 records out 00:26:55.776 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0220379 s, 465 MB/s 00:26:55.776 17:05:44 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:56.035 AIO0 00:26:56.035 17:05:44 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:56.294 17:05:45 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:26:56.294 17:05:45 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:26:56.294 17:05:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.294 17:05:45 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:26:56.294 17:05:45 -- common/autotest_common.sh@10 -- # set +x 00:26:56.294 17:05:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.552 17:05:45 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:26:56.552 "name": "app_thread", 00:26:56.552 "id": 1, 00:26:56.552 "active_pollers": [], 00:26:56.552 "timed_pollers": [ 00:26:56.552 { 00:26:56.552 "name": "rpc_subsystem_poll", 00:26:56.552 "id": 1, 00:26:56.552 "state": "waiting", 00:26:56.552 "run_count": 0, 00:26:56.552 "busy_count": 0, 00:26:56.552 "period_ticks": 8800000 00:26:56.552 } 00:26:56.552 ], 00:26:56.552 "paused_pollers": [] 00:26:56.552 }' 00:26:56.552 17:05:45 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:26:56.552 17:05:45 -- 
interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:26:56.552 17:05:45 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:26:56.552 17:05:45 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:26:56.552 17:05:45 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:26:56.552 17:05:45 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:26:56.552 17:05:45 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:26:56.552 17:05:45 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 133153 00:26:56.552 17:05:45 -- common/autotest_common.sh@936 -- # '[' -z 133153 ']' 00:26:56.552 17:05:45 -- common/autotest_common.sh@940 -- # kill -0 133153 00:26:56.552 17:05:45 -- common/autotest_common.sh@941 -- # uname 00:26:56.552 17:05:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:56.552 17:05:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 133153 00:26:56.552 17:05:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:56.552 17:05:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:56.552 17:05:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 133153' 00:26:56.552 killing process with pid 133153 00:26:56.552 17:05:45 -- common/autotest_common.sh@955 -- # kill 133153 00:26:56.552 17:05:45 -- common/autotest_common.sh@960 -- # wait 133153 00:26:57.486 17:05:46 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:26:57.486 17:05:46 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:57.486 ************************************ 00:26:57.486 END TEST reap_unregistered_poller 00:26:57.487 ************************************ 00:26:57.487 00:26:57.487 real 0m3.464s 00:26:57.487 user 0m2.766s 00:26:57.487 sys 0m0.601s 00:26:57.487 17:05:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:57.487 17:05:46 -- common/autotest_common.sh@10 -- # set +x 00:26:57.745 17:05:46 -- spdk/autotest.sh@191 -- # uname -s 00:26:57.745 17:05:46 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:26:57.745 17:05:46 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:26:57.745 17:05:46 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:26:57.745 17:05:46 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:26:57.745 17:05:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:57.745 17:05:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:57.745 17:05:46 -- common/autotest_common.sh@10 -- # set +x 00:26:57.745 ************************************ 00:26:57.745 START TEST spdk_dd 00:26:57.745 ************************************ 00:26:57.745 17:05:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:26:57.745 * Looking for test storage... 
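The poller checks in the reap_unregistered_poller trace above boil down to pulling names out of thread_get_pollers JSON with jq and comparing the before/after name sets: once the AIO bdev's poller is unregistered, only rpc_subsystem_poll should remain on app_thread. A hedged sketch of that extraction (assumes $rpc_py points at scripts/rpc.py as set in the trace; -s selects the RPC socket):

    # Sketch: capture thread 0's poller state, then collect poller names
    # the same way the traced jq filters do.
    app_thread=$("$rpc_py" -s /var/tmp/spdk.sock thread_get_pollers \
        | jq -r '.threads[0]')
    pollers=$(jq -r '.active_pollers[].name' <<< "$app_thread")
    pollers+=' '
    pollers+=$(jq -r '.timed_pollers[].name' <<< "$app_thread")
    # In the traced run this collapses to " rpc_subsystem_poll" after reaping:
    [[ "$pollers" == ' rpc_subsystem_poll' ]] && echo "unregistered poller reaped"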
00:26:57.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:57.745 17:05:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:57.745 17:05:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:57.745 17:05:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:57.745 17:05:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:57.745 17:05:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:57.745 17:05:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:57.745 17:05:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:57.745 17:05:46 -- scripts/common.sh@335 -- # IFS=.-: 00:26:57.745 17:05:46 -- scripts/common.sh@335 -- # read -ra ver1 00:26:57.745 17:05:46 -- scripts/common.sh@336 -- # IFS=.-: 00:26:57.745 17:05:46 -- scripts/common.sh@336 -- # read -ra ver2 00:26:57.745 17:05:46 -- scripts/common.sh@337 -- # local 'op=<' 00:26:57.745 17:05:46 -- scripts/common.sh@339 -- # ver1_l=2 00:26:57.745 17:05:46 -- scripts/common.sh@340 -- # ver2_l=1 00:26:57.745 17:05:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:57.745 17:05:46 -- scripts/common.sh@343 -- # case "$op" in 00:26:57.745 17:05:46 -- scripts/common.sh@344 -- # : 1 00:26:57.745 17:05:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:57.745 17:05:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:57.745 17:05:46 -- scripts/common.sh@364 -- # decimal 1 00:26:57.745 17:05:46 -- scripts/common.sh@352 -- # local d=1 00:26:57.745 17:05:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:57.745 17:05:46 -- scripts/common.sh@354 -- # echo 1 00:26:57.745 17:05:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:57.745 17:05:46 -- scripts/common.sh@365 -- # decimal 2 00:26:57.745 17:05:46 -- scripts/common.sh@352 -- # local d=2 00:26:57.745 17:05:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:57.745 17:05:46 -- scripts/common.sh@354 -- # echo 2 00:26:57.745 17:05:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:57.745 17:05:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:57.745 17:05:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:57.745 17:05:46 -- scripts/common.sh@367 -- # return 0 00:26:57.746 17:05:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:57.746 17:05:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:57.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.746 --rc genhtml_branch_coverage=1 00:26:57.746 --rc genhtml_function_coverage=1 00:26:57.746 --rc genhtml_legend=1 00:26:57.746 --rc geninfo_all_blocks=1 00:26:57.746 --rc geninfo_unexecuted_blocks=1 00:26:57.746 00:26:57.746 ' 00:26:57.746 17:05:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:57.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.746 --rc genhtml_branch_coverage=1 00:26:57.746 --rc genhtml_function_coverage=1 00:26:57.746 --rc genhtml_legend=1 00:26:57.746 --rc geninfo_all_blocks=1 00:26:57.746 --rc geninfo_unexecuted_blocks=1 00:26:57.746 00:26:57.746 ' 00:26:57.746 17:05:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:57.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.746 --rc genhtml_branch_coverage=1 00:26:57.746 --rc genhtml_function_coverage=1 00:26:57.746 --rc genhtml_legend=1 00:26:57.746 --rc geninfo_all_blocks=1 00:26:57.746 --rc geninfo_unexecuted_blocks=1 00:26:57.746 00:26:57.746 ' 00:26:57.746 17:05:46 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:57.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.746 --rc genhtml_branch_coverage=1 00:26:57.746 --rc genhtml_function_coverage=1 00:26:57.746 --rc genhtml_legend=1 00:26:57.746 --rc geninfo_all_blocks=1 00:26:57.746 --rc geninfo_unexecuted_blocks=1 00:26:57.746 00:26:57.746 ' 00:26:57.746 17:05:46 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:57.746 17:05:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:57.746 17:05:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:57.746 17:05:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:57.746 17:05:46 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:57.746 17:05:46 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:57.746 17:05:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:57.746 17:05:46 -- paths/export.sh@5 -- # export PATH 00:26:57.746 17:05:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:57.746 17:05:46 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:58.005 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:26:58.263 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:59.641 17:05:48 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:26:59.641 17:05:48 -- dd/dd.sh@11 -- # nvme_in_userspace 00:26:59.642 17:05:48 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:59.642 17:05:48 -- scripts/common.sh@312 -- # local nvmes 00:26:59.642 17:05:48 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:59.642 17:05:48 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:59.642 17:05:48 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:59.642 17:05:48 -- scripts/common.sh@297 -- # local bdf= 00:26:59.642 17:05:48 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:59.642 17:05:48 -- scripts/common.sh@232 -- # local class 00:26:59.642 17:05:48 -- scripts/common.sh@233 -- # local 
subclass 00:26:59.642 17:05:48 -- scripts/common.sh@234 -- # local progif 00:26:59.642 17:05:48 -- scripts/common.sh@235 -- # printf %02x 1 00:26:59.642 17:05:48 -- scripts/common.sh@235 -- # class=01 00:26:59.642 17:05:48 -- scripts/common.sh@236 -- # printf %02x 8 00:26:59.642 17:05:48 -- scripts/common.sh@236 -- # subclass=08 00:26:59.642 17:05:48 -- scripts/common.sh@237 -- # printf %02x 2 00:26:59.642 17:05:48 -- scripts/common.sh@237 -- # progif=02 00:26:59.642 17:05:48 -- scripts/common.sh@239 -- # hash lspci 00:26:59.642 17:05:48 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:59.642 17:05:48 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:59.642 17:05:48 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:59.642 17:05:48 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:59.642 17:05:48 -- scripts/common.sh@244 -- # tr -d '"' 00:26:59.642 17:05:48 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:59.642 17:05:48 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:59.642 17:05:48 -- scripts/common.sh@15 -- # local i 00:26:59.642 17:05:48 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:59.642 17:05:48 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:59.642 17:05:48 -- scripts/common.sh@24 -- # return 0 00:26:59.642 17:05:48 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:59.642 17:05:48 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:59.642 17:05:48 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:59.642 17:05:48 -- scripts/common.sh@322 -- # uname -s 00:26:59.642 17:05:48 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:59.642 17:05:48 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:59.642 17:05:48 -- scripts/common.sh@327 -- # (( 1 )) 00:26:59.642 17:05:48 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:26:59.642 17:05:48 -- dd/dd.sh@13 -- # check_liburing 00:26:59.642 17:05:48 -- dd/common.sh@139 -- # local lib so 00:26:59.642 17:05:48 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:26:59.642 17:05:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:59.642 17:05:48 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:26:59.642 17:05:48 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:59.642 17:05:48 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:26:59.642 17:05:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:59.642 17:05:48 -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:26:59.642 17:05:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:59.642 17:05:48 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:26:59.642 17:05:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:59.642 17:05:48 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:26:59.642 17:05:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:59.642 17:05:48 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:26:59.642 17:05:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:59.642 17:05:48 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:26:59.642 17:05:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:59.642 17:05:48 -- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]] 00:26:59.642 17:05:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:59.642 17:05:48 -- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]] 00:26:59.642 17:05:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:59.642 
17:05:48 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:26:59.642 17:05:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:59.642 17:05:48 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:26:59.642 17:05:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:59.642 17:05:48 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:26:59.642 17:05:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:59.642 17:05:48 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:26:59.642 17:05:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:59.642 17:05:48 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:26:59.642 17:05:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:59.642 17:05:48 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:26:59.642 17:05:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:59.642 17:05:48 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:26:59.642 17:05:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:59.642 17:05:48 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:26:59.642 17:05:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:59.642 17:05:48 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:26:59.642 17:05:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:59.642 17:05:48 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:26:59.642 17:05:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:59.642 17:05:48 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:26:59.642 17:05:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:59.642 17:05:48 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:26:59.642 17:05:48 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:26:59.642 17:05:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:59.642 17:05:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:59.642 17:05:48 -- common/autotest_common.sh@10 -- # set +x 00:26:59.642 ************************************ 00:26:59.642 START TEST spdk_dd_basic_rw 00:26:59.642 ************************************ 00:26:59.642 17:05:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:26:59.642 * Looking for test storage... 
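check_liburing, traced above, decides whether the spdk_dd binary links against liburing without executing it: with LD_TRACE_LOADED_OBJECTS=1 in the environment, the dynamic loader prints each shared object the binary would map and exits, and the loop compares every library name against the liburing.so.* glob. A condensed sketch of that loop, with the same field layout the trace reads:

    check_liburing() {
        local lib so
        local -g liburing_in_use=0
        # The loader prints lines like
        #   libaio.so.1 => /lib/x86_64-linux-gnu/libaio.so.1 (0x...)
        # instead of running the program.
        while read -r lib _ so _; do
            [[ $lib == liburing.so.* ]] && liburing_in_use=1
        done < <(LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
    }

In this run no liburing.so.* line appears, so liburing_in_use stays 0 and the SPDK_TEST_URING branch at dd/dd.sh@15 is skipped.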
00:26:59.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:59.642 17:05:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:59.642 17:05:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:59.642 17:05:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:59.904 17:05:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:59.904 17:05:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:59.904 17:05:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:59.904 17:05:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:59.904 17:05:48 -- scripts/common.sh@335 -- # IFS=.-: 00:26:59.904 17:05:48 -- scripts/common.sh@335 -- # read -ra ver1 00:26:59.904 17:05:48 -- scripts/common.sh@336 -- # IFS=.-: 00:26:59.904 17:05:48 -- scripts/common.sh@336 -- # read -ra ver2 00:26:59.904 17:05:48 -- scripts/common.sh@337 -- # local 'op=<' 00:26:59.904 17:05:48 -- scripts/common.sh@339 -- # ver1_l=2 00:26:59.904 17:05:48 -- scripts/common.sh@340 -- # ver2_l=1 00:26:59.904 17:05:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:59.904 17:05:48 -- scripts/common.sh@343 -- # case "$op" in 00:26:59.904 17:05:48 -- scripts/common.sh@344 -- # : 1 00:26:59.904 17:05:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:59.904 17:05:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:59.904 17:05:48 -- scripts/common.sh@364 -- # decimal 1 00:26:59.904 17:05:48 -- scripts/common.sh@352 -- # local d=1 00:26:59.904 17:05:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:59.904 17:05:48 -- scripts/common.sh@354 -- # echo 1 00:26:59.904 17:05:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:59.904 17:05:48 -- scripts/common.sh@365 -- # decimal 2 00:26:59.904 17:05:48 -- scripts/common.sh@352 -- # local d=2 00:26:59.904 17:05:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:59.904 17:05:48 -- scripts/common.sh@354 -- # echo 2 00:26:59.904 17:05:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:59.904 17:05:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:59.904 17:05:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:59.904 17:05:48 -- scripts/common.sh@367 -- # return 0 00:26:59.904 17:05:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:59.904 17:05:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:59.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.904 --rc genhtml_branch_coverage=1 00:26:59.904 --rc genhtml_function_coverage=1 00:26:59.904 --rc genhtml_legend=1 00:26:59.904 --rc geninfo_all_blocks=1 00:26:59.904 --rc geninfo_unexecuted_blocks=1 00:26:59.904 00:26:59.904 ' 00:26:59.904 17:05:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:59.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.904 --rc genhtml_branch_coverage=1 00:26:59.904 --rc genhtml_function_coverage=1 00:26:59.904 --rc genhtml_legend=1 00:26:59.904 --rc geninfo_all_blocks=1 00:26:59.904 --rc geninfo_unexecuted_blocks=1 00:26:59.904 00:26:59.904 ' 00:26:59.904 17:05:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:59.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.904 --rc genhtml_branch_coverage=1 00:26:59.904 --rc genhtml_function_coverage=1 00:26:59.904 --rc genhtml_legend=1 00:26:59.904 --rc geninfo_all_blocks=1 00:26:59.904 --rc geninfo_unexecuted_blocks=1 00:26:59.904 00:26:59.904 ' 00:26:59.904 17:05:48 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:59.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.904 --rc genhtml_branch_coverage=1 00:26:59.904 --rc genhtml_function_coverage=1 00:26:59.904 --rc genhtml_legend=1 00:26:59.904 --rc geninfo_all_blocks=1 00:26:59.904 --rc geninfo_unexecuted_blocks=1 00:26:59.904 00:26:59.904 ' 00:26:59.904 17:05:48 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:59.904 17:05:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.904 17:05:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.904 17:05:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.904 17:05:48 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:59.904 17:05:48 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:59.904 17:05:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:59.904 17:05:48 -- paths/export.sh@5 -- # export PATH 00:26:59.904 17:05:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:59.904 17:05:48 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:26:59.904 17:05:48 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:26:59.904 17:05:48 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:26:59.904 17:05:48 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:26:59.904 17:05:48 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:26:59.904 17:05:48 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:26:59.904 17:05:48 -- dd/basic_rw.sh@85 -- # declare -A 
method_bdev_nvme_attach_controller_0 00:26:59.904 17:05:48 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:59.904 17:05:48 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:59.904 17:05:48 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:26:59.904 17:05:48 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:26:59.904 17:05:48 -- dd/common.sh@126 -- # mapfile -t id 00:26:59.904 17:05:48 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:27:00.165 17:05:48 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware 
Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% 
Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 113 Data Units Written: 7 Host Read Commands: 2339 Host Write Commands: 111 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:27:00.165 17:05:48 -- dd/common.sh@130 -- # lbaf=04 00:27:00.166 17:05:48 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery 
Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer 
Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 113 Data Units Written: 7 Host Read Commands: 2339 Host Write Commands: 111 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:27:00.166 17:05:48 -- dd/common.sh@132 -- # lbaf=4096 00:27:00.166 17:05:48 -- dd/common.sh@134 -- # echo 4096 00:27:00.166 17:05:48 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:27:00.166 17:05:48 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:27:00.166 17:05:48 -- dd/basic_rw.sh@96 -- # : 00:27:00.166 17:05:48 -- dd/basic_rw.sh@96 -- # gen_conf 00:27:00.166 17:05:48 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:27:00.166 17:05:48 -- dd/common.sh@31 -- # xtrace_disable 
00:27:00.166 17:05:48 -- common/autotest_common.sh@10 -- # set +x 00:27:00.166 17:05:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:00.166 17:05:48 -- common/autotest_common.sh@10 -- # set +x 00:27:00.166 ************************************ 00:27:00.166 START TEST dd_bs_lt_native_bs 00:27:00.166 ************************************ 00:27:00.166 17:05:48 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:27:00.166 17:05:48 -- common/autotest_common.sh@650 -- # local es=0 00:27:00.166 17:05:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:27:00.166 17:05:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:00.166 17:05:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:00.166 17:05:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:00.166 17:05:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:00.166 17:05:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:00.166 17:05:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:00.166 17:05:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:00.166 17:05:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:00.166 17:05:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:27:00.166 { 00:27:00.166 "subsystems": [ 00:27:00.166 { 00:27:00.166 "subsystem": "bdev", 00:27:00.166 "config": [ 00:27:00.166 { 00:27:00.166 "params": { 00:27:00.166 "trtype": "pcie", 00:27:00.166 "traddr": "0000:00:06.0", 00:27:00.166 "name": "Nvme0" 00:27:00.166 }, 00:27:00.166 "method": "bdev_nvme_attach_controller" 00:27:00.166 }, 00:27:00.166 { 00:27:00.166 "method": "bdev_wait_for_examine" 00:27:00.166 } 00:27:00.166 ] 00:27:00.166 } 00:27:00.166 ] 00:27:00.166 } 00:27:00.166 [2024-11-05 17:05:49.007581] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
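The identify dump that dominates the preceding lines is consumed by just two bash regex tests in dd/common.sh: one captures which LBA format is current (#04), the other captures that format's data size (4096), which basic_rw then adopts as native_bs. The extraction reduces to the following, assuming the identify output has been collected into $id with mapfile as in the trace:

    get_native_nvme_bs() {
        local id=$1 lbaf re
        # First capture: the index of the current LBA format.
        re='Current LBA Format: *LBA Format #([0-9]+)'
        [[ $id =~ $re ]] && lbaf=${BASH_REMATCH[1]}     # -> 04
        # Second capture: that format's data size in bytes.
        re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
        [[ $id =~ $re ]] && echo "${BASH_REMATCH[1]}"   # -> 4096
    }

With LBA Format #04 reporting Data Size: 4096 and Metadata Size: 0, the device's native block size is 4096 bytes, which is why the dd_bs_lt_native_bs run in progress feeds spdk_dd a deliberately smaller --bs=2048.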
00:27:00.166 [2024-11-05 17:05:49.007992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133475 ] 00:27:00.425 [2024-11-05 17:05:49.180927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.682 [2024-11-05 17:05:49.413765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.946 [2024-11-05 17:05:49.735249] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:27:00.946 [2024-11-05 17:05:49.735574] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:01.516 [2024-11-05 17:05:50.319709] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:01.774 17:05:50 -- common/autotest_common.sh@653 -- # es=234 00:27:01.774 17:05:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:01.774 17:05:50 -- common/autotest_common.sh@662 -- # es=106 00:27:01.774 17:05:50 -- common/autotest_common.sh@663 -- # case "$es" in 00:27:01.774 17:05:50 -- common/autotest_common.sh@670 -- # es=1 00:27:01.774 ************************************ 00:27:01.774 END TEST dd_bs_lt_native_bs 00:27:01.774 ************************************ 00:27:01.774 17:05:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:01.774 00:27:01.774 real 0m1.720s 00:27:01.774 user 0m1.406s 00:27:01.774 sys 0m0.269s 00:27:01.774 17:05:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:01.774 17:05:50 -- common/autotest_common.sh@10 -- # set +x 00:27:02.033 17:05:50 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:27:02.033 17:05:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:02.033 17:05:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:02.033 17:05:50 -- common/autotest_common.sh@10 -- # set +x 00:27:02.033 ************************************ 00:27:02.033 START TEST dd_rw 00:27:02.033 ************************************ 00:27:02.033 17:05:50 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:27:02.033 17:05:50 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:27:02.033 17:05:50 -- dd/basic_rw.sh@12 -- # local count size 00:27:02.033 17:05:50 -- dd/basic_rw.sh@13 -- # local qds bss 00:27:02.033 17:05:50 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:27:02.033 17:05:50 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:27:02.033 17:05:50 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:27:02.033 17:05:50 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:27:02.033 17:05:50 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:27:02.033 17:05:50 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:27:02.033 17:05:50 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:27:02.033 17:05:50 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:27:02.033 17:05:50 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:02.033 17:05:50 -- dd/basic_rw.sh@23 -- # count=15 00:27:02.033 17:05:50 -- dd/basic_rw.sh@24 -- # count=15 00:27:02.033 17:05:50 -- dd/basic_rw.sh@25 -- # size=61440 00:27:02.033 17:05:50 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:27:02.033 17:05:50 -- dd/common.sh@98 -- # xtrace_disable 00:27:02.033 17:05:50 -- common/autotest_common.sh@10 -- # set +x 00:27:02.601 17:05:51 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
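dd_bs_lt_native_bs passes precisely because the copy fails: spdk_dd rejects --bs=2048 against the 4096-byte native block, the app stops non-zero, and the NOT wrapper inverts that outcome. The es=234, es > 128, es=106, es=1 sequence above is the wrapper normalizing the status before inverting it. A sketch of the visible logic, hedged in that the real helper in autotest_common.sh also validates the executable and may special-case more statuses:

    NOT() {
        local es=0
        "$@" || es=$?
        # Statuses above 128 conventionally encode termination by
        # signal; fold them back into the 0-127 range, as the trace
        # does for 234 -> 106.
        (( es > 128 )) && es=$(( es - 128 ))
        case "$es" in
            0) ;;          # wrapped command unexpectedly succeeded
            *) es=1 ;;     # any failure collapses to a generic 1
        esac
        # Succeed exactly when the wrapped command failed.
        (( !es == 0 ))
    }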
00:27:02.601 17:05:51 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:02.601 17:05:51 -- dd/common.sh@31 -- # xtrace_disable 00:27:02.601 17:05:51 -- common/autotest_common.sh@10 -- # set +x 00:27:02.601 [2024-11-05 17:05:51.302567] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:02.601 [2024-11-05 17:05:51.303046] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133537 ] 00:27:02.601 { 00:27:02.601 "subsystems": [ 00:27:02.601 { 00:27:02.601 "subsystem": "bdev", 00:27:02.601 "config": [ 00:27:02.601 { 00:27:02.601 "params": { 00:27:02.601 "trtype": "pcie", 00:27:02.601 "traddr": "0000:00:06.0", 00:27:02.601 "name": "Nvme0" 00:27:02.601 }, 00:27:02.601 "method": "bdev_nvme_attach_controller" 00:27:02.601 }, 00:27:02.601 { 00:27:02.601 "method": "bdev_wait_for_examine" 00:27:02.601 } 00:27:02.601 ] 00:27:02.601 } 00:27:02.601 ] 00:27:02.601 } 00:27:02.601 [2024-11-05 17:05:51.453954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.859 [2024-11-05 17:05:51.611490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.117  [2024-11-05T17:05:52.930Z] Copying: 60/60 [kB] (average 19 MBps) 00:27:04.053 00:27:04.053 17:05:52 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:27:04.053 17:05:52 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:04.053 17:05:52 -- dd/common.sh@31 -- # xtrace_disable 00:27:04.053 17:05:52 -- common/autotest_common.sh@10 -- # set +x 00:27:04.053 { 00:27:04.053 "subsystems": [ 00:27:04.053 { 00:27:04.053 "subsystem": "bdev", 00:27:04.053 "config": [ 00:27:04.053 { 00:27:04.053 "params": { 00:27:04.053 "trtype": "pcie", 00:27:04.053 "traddr": "0000:00:06.0", 00:27:04.053 "name": "Nvme0" 00:27:04.053 }, 00:27:04.053 "method": "bdev_nvme_attach_controller" 00:27:04.053 }, 00:27:04.053 { 00:27:04.053 "method": "bdev_wait_for_examine" 00:27:04.053 } 00:27:04.053 ] 00:27:04.053 } 00:27:04.053 ] 00:27:04.053 } 00:27:04.053 [2024-11-05 17:05:52.869001] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
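Each dd_rw cell is a two-step transfer. The @30 invocation above pushed 60 kB from dd.dump0 into the Nvme0n1 bdev at queue depth 1, and the @37 invocation now pulls the same 15 blocks back into dd.dump1. Stripped of the harness plumbing, the pair looks like this; the flags are taken verbatim from the trace, while feeding the bdev config from a bdev.json file on fd 62 is a simplification of the gen_conf process substitution:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

    # Write: file -> blockdev. --ob names the output bdev and --qd
    # the queue depth; the attach config arrives on fd 62.
    "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 \
        --json /dev/fd/62 62< bdev.json

    # Read: blockdev -> file. --count=15 bounds the read to the
    # 15 x 4096 = 61440 bytes just written.
    "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15 \
        --json /dev/fd/62 62< bdev.json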
00:27:04.053 [2024-11-05 17:05:52.869563] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133560 ] 00:27:04.311 [2024-11-05 17:05:53.037781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.570 [2024-11-05 17:05:53.221558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.828  [2024-11-05T17:05:54.640Z] Copying: 60/60 [kB] (average 19 MBps) 00:27:05.763 00:27:05.763 17:05:54 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:05.763 17:05:54 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:27:05.763 17:05:54 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:05.763 17:05:54 -- dd/common.sh@11 -- # local nvme_ref= 00:27:05.763 17:05:54 -- dd/common.sh@12 -- # local size=61440 00:27:05.763 17:05:54 -- dd/common.sh@14 -- # local bs=1048576 00:27:05.763 17:05:54 -- dd/common.sh@15 -- # local count=1 00:27:05.763 17:05:54 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:05.763 17:05:54 -- dd/common.sh@18 -- # gen_conf 00:27:05.763 17:05:54 -- dd/common.sh@31 -- # xtrace_disable 00:27:05.763 17:05:54 -- common/autotest_common.sh@10 -- # set +x 00:27:05.763 { 00:27:05.763 "subsystems": [ 00:27:05.763 { 00:27:05.763 "subsystem": "bdev", 00:27:05.763 "config": [ 00:27:05.763 { 00:27:05.763 "params": { 00:27:05.763 "trtype": "pcie", 00:27:05.763 "traddr": "0000:00:06.0", 00:27:05.763 "name": "Nvme0" 00:27:05.763 }, 00:27:05.763 "method": "bdev_nvme_attach_controller" 00:27:05.763 }, 00:27:05.763 { 00:27:05.763 "method": "bdev_wait_for_examine" 00:27:05.763 } 00:27:05.763 ] 00:27:05.763 } 00:27:05.763 ] 00:27:05.763 } 00:27:05.763 [2024-11-05 17:05:54.560931] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
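The diff and clear_nvme steps above close the cell: diff -q exits non-zero on the first differing byte, failing the test if the round trip corrupted anything, and clear_nvme rewrites the touched region with zeros so the next bs/qd combination starts from a known device state. The trace shows size=61440 served by a single bs=1048576 block with count=1; the ceiling division below is an assumption about how that count is derived, since the formula itself is not in the trace:

    # Verify the round trip byte for byte.
    diff -q dd.dump0 dd.dump1

    # Reset: zero the region that was written.
    size=61440
    bs=1048576
    count=$(( (size + bs - 1) / bs ))   # assumed derivation; -> 1
    "$SPDK_DD" --if=/dev/zero --bs="$bs" --ob=Nvme0n1 --count="$count" \
        --json /dev/fd/62 62< bdev.json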
00:27:05.763 [2024-11-05 17:05:54.561287] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133593 ] 00:27:06.024 [2024-11-05 17:05:54.729203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.024 [2024-11-05 17:05:54.889280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.627  [2024-11-05T17:05:56.072Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:27:07.195 00:27:07.195 17:05:56 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:07.195 17:05:56 -- dd/basic_rw.sh@23 -- # count=15 00:27:07.195 17:05:56 -- dd/basic_rw.sh@24 -- # count=15 00:27:07.195 17:05:56 -- dd/basic_rw.sh@25 -- # size=61440 00:27:07.195 17:05:56 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:27:07.195 17:05:56 -- dd/common.sh@98 -- # xtrace_disable 00:27:07.195 17:05:56 -- common/autotest_common.sh@10 -- # set +x 00:27:07.760 17:05:56 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:27:07.760 17:05:56 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:07.760 17:05:56 -- dd/common.sh@31 -- # xtrace_disable 00:27:07.760 17:05:56 -- common/autotest_common.sh@10 -- # set +x 00:27:08.017 { 00:27:08.017 "subsystems": [ 00:27:08.017 { 00:27:08.017 "subsystem": "bdev", 00:27:08.017 "config": [ 00:27:08.017 { 00:27:08.017 "params": { 00:27:08.017 "trtype": "pcie", 00:27:08.017 "traddr": "0000:00:06.0", 00:27:08.017 "name": "Nvme0" 00:27:08.017 }, 00:27:08.017 "method": "bdev_nvme_attach_controller" 00:27:08.017 }, 00:27:08.017 { 00:27:08.017 "method": "bdev_wait_for_examine" 00:27:08.017 } 00:27:08.017 ] 00:27:08.017 } 00:27:08.017 ] 00:27:08.017 } 00:27:08.017 [2024-11-05 17:05:56.665654] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
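Before each combination the harness regenerates the input file, here gen_bytes 61440 ahead of the qd=64 pass. The helper's body never appears in this trace, so the following is only a plausible stand-in that produces the right amount of data; the real dd/common.sh version may use a different source or byte pattern:

    gen_bytes() {
        local size=$1
        # Hypothetical reconstruction: cut $size random bytes into
        # the input file the write step consumes.
        head -c "$size" /dev/urandom > dd.dump0
    }

    gen_bytes 61440   # 15 blocks of 4096 bytes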
00:27:08.017 [2024-11-05 17:05:56.666169] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133620 ] 00:27:08.017 [2024-11-05 17:05:56.833967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.275 [2024-11-05 17:05:56.987559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.533  [2024-11-05T17:05:58.345Z] Copying: 60/60 [kB] (average 58 MBps) 00:27:09.468 00:27:09.468 17:05:58 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:27:09.468 17:05:58 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:09.468 17:05:58 -- dd/common.sh@31 -- # xtrace_disable 00:27:09.468 17:05:58 -- common/autotest_common.sh@10 -- # set +x 00:27:09.468 { 00:27:09.468 "subsystems": [ 00:27:09.468 { 00:27:09.468 "subsystem": "bdev", 00:27:09.468 "config": [ 00:27:09.468 { 00:27:09.468 "params": { 00:27:09.468 "trtype": "pcie", 00:27:09.468 "traddr": "0000:00:06.0", 00:27:09.468 "name": "Nvme0" 00:27:09.468 }, 00:27:09.468 "method": "bdev_nvme_attach_controller" 00:27:09.468 }, 00:27:09.468 { 00:27:09.468 "method": "bdev_wait_for_examine" 00:27:09.468 } 00:27:09.468 ] 00:27:09.468 } 00:27:09.468 ] 00:27:09.468 } 00:27:09.468 [2024-11-05 17:05:58.322485] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:09.468 [2024-11-05 17:05:58.323122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133645 ] 00:27:09.726 [2024-11-05 17:05:58.493947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.984 [2024-11-05 17:05:58.653166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.242  [2024-11-05T17:06:00.054Z] Copying: 60/60 [kB] (average 58 MBps) 00:27:11.177 00:27:11.177 17:05:59 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:11.177 17:05:59 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:27:11.177 17:05:59 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:11.177 17:05:59 -- dd/common.sh@11 -- # local nvme_ref= 00:27:11.177 17:05:59 -- dd/common.sh@12 -- # local size=61440 00:27:11.177 17:05:59 -- dd/common.sh@14 -- # local bs=1048576 00:27:11.177 17:05:59 -- dd/common.sh@15 -- # local count=1 00:27:11.177 17:05:59 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:11.177 17:05:59 -- dd/common.sh@18 -- # gen_conf 00:27:11.177 17:05:59 -- dd/common.sh@31 -- # xtrace_disable 00:27:11.177 17:05:59 -- common/autotest_common.sh@10 -- # set +x 00:27:11.177 [2024-11-05 17:05:59.899476] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:11.177 [2024-11-05 17:05:59.899885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133673 ] 00:27:11.177 { 00:27:11.177 "subsystems": [ 00:27:11.177 { 00:27:11.177 "subsystem": "bdev", 00:27:11.177 "config": [ 00:27:11.177 { 00:27:11.177 "params": { 00:27:11.177 "trtype": "pcie", 00:27:11.177 "traddr": "0000:00:06.0", 00:27:11.177 "name": "Nvme0" 00:27:11.177 }, 00:27:11.177 "method": "bdev_nvme_attach_controller" 00:27:11.177 }, 00:27:11.177 { 00:27:11.177 "method": "bdev_wait_for_examine" 00:27:11.177 } 00:27:11.177 ] 00:27:11.177 } 00:27:11.177 ] 00:27:11.177 } 00:27:11.177 [2024-11-05 17:06:00.053279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.435 [2024-11-05 17:06:00.225475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.694  [2024-11-05T17:06:01.505Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:12.628 00:27:12.628 17:06:01 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:27:12.628 17:06:01 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:12.628 17:06:01 -- dd/basic_rw.sh@23 -- # count=7 00:27:12.628 17:06:01 -- dd/basic_rw.sh@24 -- # count=7 00:27:12.628 17:06:01 -- dd/basic_rw.sh@25 -- # size=57344 00:27:12.628 17:06:01 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:27:12.628 17:06:01 -- dd/common.sh@98 -- # xtrace_disable 00:27:12.628 17:06:01 -- common/autotest_common.sh@10 -- # set +x 00:27:13.193 17:06:02 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:27:13.193 17:06:02 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:13.193 17:06:02 -- dd/common.sh@31 -- # xtrace_disable 00:27:13.193 17:06:02 -- common/autotest_common.sh@10 -- # set +x 00:27:13.193 [2024-11-05 17:06:02.068344] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
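The jump from bs=4096 to bs=8192 comes from the shift loop declared at the top of basic_rw: each of the three passes doubles the native block size, and the per-pass count shrinks so the total transfer stays comparable (15 x 4096 = 61440 bytes, then 7 x 8192 = 57344 bytes). The shift arithmetic exactly as the trace applies it; the counts are read off the trace rather than derived, since their formula is not visible here:

    native_bs=4096
    bss=()
    for bs in {0..2}; do
        bss+=( $(( native_bs << bs )) )   # 4096, 8192, 16384
    done

    echo $(( 15 * 4096 ))   # 61440, the bs=4096 pass total
    echo $((  7 * 8192 ))   # 57344, the bs=8192 pass total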
00:27:13.193 [2024-11-05 17:06:02.068752] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133705 ] 00:27:13.193 { 00:27:13.193 "subsystems": [ 00:27:13.193 { 00:27:13.194 "subsystem": "bdev", 00:27:13.194 "config": [ 00:27:13.194 { 00:27:13.194 "params": { 00:27:13.194 "trtype": "pcie", 00:27:13.194 "traddr": "0000:00:06.0", 00:27:13.194 "name": "Nvme0" 00:27:13.194 }, 00:27:13.194 "method": "bdev_nvme_attach_controller" 00:27:13.194 }, 00:27:13.194 { 00:27:13.194 "method": "bdev_wait_for_examine" 00:27:13.194 } 00:27:13.194 ] 00:27:13.194 } 00:27:13.194 ] 00:27:13.194 } 00:27:13.452 [2024-11-05 17:06:02.222586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.709 [2024-11-05 17:06:02.406684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.967  [2024-11-05T17:06:03.779Z] Copying: 56/56 [kB] (average 54 MBps) 00:27:14.902 00:27:14.902 17:06:03 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:27:14.902 17:06:03 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:14.902 17:06:03 -- dd/common.sh@31 -- # xtrace_disable 00:27:14.902 17:06:03 -- common/autotest_common.sh@10 -- # set +x 00:27:14.902 [2024-11-05 17:06:03.677571] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:14.902 [2024-11-05 17:06:03.677969] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133734 ] 00:27:14.902 { 00:27:14.902 "subsystems": [ 00:27:14.902 { 00:27:14.902 "subsystem": "bdev", 00:27:14.902 "config": [ 00:27:14.902 { 00:27:14.902 "params": { 00:27:14.902 "trtype": "pcie", 00:27:14.902 "traddr": "0000:00:06.0", 00:27:14.902 "name": "Nvme0" 00:27:14.902 }, 00:27:14.902 "method": "bdev_nvme_attach_controller" 00:27:14.902 }, 00:27:14.902 { 00:27:14.902 "method": "bdev_wait_for_examine" 00:27:14.902 } 00:27:14.902 ] 00:27:14.902 } 00:27:14.902 ] 00:27:14.902 } 00:27:15.160 [2024-11-05 17:06:03.828466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.160 [2024-11-05 17:06:03.988378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.418  [2024-11-05T17:06:05.229Z] Copying: 56/56 [kB] (average 27 MBps) 00:27:16.352 00:27:16.610 17:06:05 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:16.610 17:06:05 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:27:16.610 17:06:05 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:16.610 17:06:05 -- dd/common.sh@11 -- # local nvme_ref= 00:27:16.610 17:06:05 -- dd/common.sh@12 -- # local size=57344 00:27:16.610 17:06:05 -- dd/common.sh@14 -- # local bs=1048576 00:27:16.610 17:06:05 -- dd/common.sh@15 -- # local count=1 00:27:16.610 17:06:05 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:16.610 17:06:05 -- dd/common.sh@18 -- # gen_conf 00:27:16.610 17:06:05 -- dd/common.sh@31 -- # xtrace_disable 00:27:16.610 17:06:05 -- common/autotest_common.sh@10 -- # set +x 00:27:16.610 [2024-11-05 
17:06:05.306451] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:16.610 [2024-11-05 17:06:05.306886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133755 ] 00:27:16.610 { 00:27:16.610 "subsystems": [ 00:27:16.610 { 00:27:16.610 "subsystem": "bdev", 00:27:16.610 "config": [ 00:27:16.610 { 00:27:16.610 "params": { 00:27:16.610 "trtype": "pcie", 00:27:16.610 "traddr": "0000:00:06.0", 00:27:16.610 "name": "Nvme0" 00:27:16.610 }, 00:27:16.610 "method": "bdev_nvme_attach_controller" 00:27:16.610 }, 00:27:16.610 { 00:27:16.610 "method": "bdev_wait_for_examine" 00:27:16.610 } 00:27:16.610 ] 00:27:16.610 } 00:27:16.610 ] 00:27:16.610 } 00:27:16.610 [2024-11-05 17:06:05.458890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.868 [2024-11-05 17:06:05.621014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.127  [2024-11-05T17:06:06.937Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:27:18.060 00:27:18.060 17:06:06 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:18.060 17:06:06 -- dd/basic_rw.sh@23 -- # count=7 00:27:18.060 17:06:06 -- dd/basic_rw.sh@24 -- # count=7 00:27:18.060 17:06:06 -- dd/basic_rw.sh@25 -- # size=57344 00:27:18.060 17:06:06 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:27:18.060 17:06:06 -- dd/common.sh@98 -- # xtrace_disable 00:27:18.060 17:06:06 -- common/autotest_common.sh@10 -- # set +x 00:27:18.625 17:06:07 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:27:18.625 17:06:07 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:18.625 17:06:07 -- dd/common.sh@31 -- # xtrace_disable 00:27:18.625 17:06:07 -- common/autotest_common.sh@10 -- # set +x 00:27:18.625 { 00:27:18.625 "subsystems": [ 00:27:18.625 { 00:27:18.625 "subsystem": "bdev", 00:27:18.625 "config": [ 00:27:18.625 { 00:27:18.625 "params": { 00:27:18.625 "trtype": "pcie", 00:27:18.625 "traddr": "0000:00:06.0", 00:27:18.625 "name": "Nvme0" 00:27:18.625 }, 00:27:18.625 "method": "bdev_nvme_attach_controller" 00:27:18.625 }, 00:27:18.625 { 00:27:18.625 "method": "bdev_wait_for_examine" 00:27:18.625 } 00:27:18.625 ] 00:27:18.625 } 00:27:18.625 ] 00:27:18.625 } 00:27:18.625 [2024-11-05 17:06:07.371839] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
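Stepping back, every round in this excerpt is one cell of the same grid: for each block size in bss and each queue depth in qds, regenerate input, write, read back, compare, wipe. A compact restatement of that outer structure, with gen_bytes sketched earlier; the per-size counts are the observed values, and the 16384 pass falls outside this excerpt:

    declare -A counts=( [4096]=15 [8192]=7 )   # from the trace
    for bs in 4096 8192; do
        for qd in 1 64; do
            count=${counts[$bs]}
            gen_bytes $(( count * bs ))
            "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" \
                --json /dev/fd/62 62< bdev.json
            "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" \
                --count="$count" --json /dev/fd/62 62< bdev.json
            diff -q dd.dump0 dd.dump1
            "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 \
                --json /dev/fd/62 62< bdev.json
        done
    done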
00:27:18.625 [2024-11-05 17:06:07.372398] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133787 ] 00:27:18.883 [2024-11-05 17:06:07.538970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.883 [2024-11-05 17:06:07.698444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.141  [2024-11-05T17:06:08.952Z] Copying: 56/56 [kB] (average 54 MBps) 00:27:20.075 00:27:20.075 17:06:08 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:27:20.075 17:06:08 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:20.075 17:06:08 -- dd/common.sh@31 -- # xtrace_disable 00:27:20.075 17:06:08 -- common/autotest_common.sh@10 -- # set +x 00:27:20.333 { 00:27:20.333 "subsystems": [ 00:27:20.333 { 00:27:20.333 "subsystem": "bdev", 00:27:20.333 "config": [ 00:27:20.333 { 00:27:20.333 "params": { 00:27:20.333 "trtype": "pcie", 00:27:20.333 "traddr": "0000:00:06.0", 00:27:20.333 "name": "Nvme0" 00:27:20.333 }, 00:27:20.333 "method": "bdev_nvme_attach_controller" 00:27:20.333 }, 00:27:20.333 { 00:27:20.333 "method": "bdev_wait_for_examine" 00:27:20.333 } 00:27:20.333 ] 00:27:20.333 } 00:27:20.333 ] 00:27:20.333 } 00:27:20.333 [2024-11-05 17:06:09.024352] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:20.333 [2024-11-05 17:06:09.024703] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133814 ] 00:27:20.333 [2024-11-05 17:06:09.193199] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.591 [2024-11-05 17:06:09.346882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.849  [2024-11-05T17:06:10.679Z] Copying: 56/56 [kB] (average 54 MBps) 00:27:21.802 00:27:21.802 17:06:10 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:21.802 17:06:10 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:27:21.802 17:06:10 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:21.802 17:06:10 -- dd/common.sh@11 -- # local nvme_ref= 00:27:21.802 17:06:10 -- dd/common.sh@12 -- # local size=57344 00:27:21.802 17:06:10 -- dd/common.sh@14 -- # local bs=1048576 00:27:21.802 17:06:10 -- dd/common.sh@15 -- # local count=1 00:27:21.802 17:06:10 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:21.802 17:06:10 -- dd/common.sh@18 -- # gen_conf 00:27:21.802 17:06:10 -- dd/common.sh@31 -- # xtrace_disable 00:27:21.802 17:06:10 -- common/autotest_common.sh@10 -- # set +x 00:27:21.802 { 00:27:21.802 "subsystems": [ 00:27:21.802 { 00:27:21.802 "subsystem": "bdev", 00:27:21.802 "config": [ 00:27:21.802 { 00:27:21.802 "params": { 00:27:21.802 "trtype": "pcie", 00:27:21.802 "traddr": "0000:00:06.0", 00:27:21.802 "name": "Nvme0" 00:27:21.802 }, 00:27:21.802 "method": "bdev_nvme_attach_controller" 00:27:21.802 }, 00:27:21.802 { 00:27:21.802 "method": "bdev_wait_for_examine" 00:27:21.802 } 00:27:21.802 ] 00:27:21.802 } 00:27:21.802 ] 00:27:21.802 } 00:27:21.802 [2024-11-05 
17:06:10.614509] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:21.802 [2024-11-05 17:06:10.614875] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133840 ] 00:27:22.060 [2024-11-05 17:06:10.784218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.060 [2024-11-05 17:06:10.945822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.626  [2024-11-05T17:06:12.438Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:23.561 00:27:23.561 17:06:12 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:27:23.561 17:06:12 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:23.561 17:06:12 -- dd/basic_rw.sh@23 -- # count=3 00:27:23.561 17:06:12 -- dd/basic_rw.sh@24 -- # count=3 00:27:23.561 17:06:12 -- dd/basic_rw.sh@25 -- # size=49152 00:27:23.561 17:06:12 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:27:23.561 17:06:12 -- dd/common.sh@98 -- # xtrace_disable 00:27:23.561 17:06:12 -- common/autotest_common.sh@10 -- # set +x 00:27:23.818 17:06:12 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:27:23.818 17:06:12 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:23.818 17:06:12 -- dd/common.sh@31 -- # xtrace_disable 00:27:23.818 17:06:12 -- common/autotest_common.sh@10 -- # set +x 00:27:23.818 { 00:27:23.818 "subsystems": [ 00:27:23.818 { 00:27:23.818 "subsystem": "bdev", 00:27:23.818 "config": [ 00:27:23.818 { 00:27:23.818 "params": { 00:27:23.818 "trtype": "pcie", 00:27:23.818 "traddr": "0000:00:06.0", 00:27:23.818 "name": "Nvme0" 00:27:23.818 }, 00:27:23.818 "method": "bdev_nvme_attach_controller" 00:27:23.818 }, 00:27:23.818 { 00:27:23.818 "method": "bdev_wait_for_examine" 00:27:23.818 } 00:27:23.818 ] 00:27:23.818 } 00:27:23.818 ] 00:27:23.818 } 00:27:23.818 [2024-11-05 17:06:12.711020] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:23.818 [2024-11-05 17:06:12.711411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133881 ] 00:27:24.076 [2024-11-05 17:06:12.880607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.333 [2024-11-05 17:06:13.035493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.591  [2024-11-05T17:06:14.402Z] Copying: 48/48 [kB] (average 46 MBps) 00:27:25.525 00:27:25.525 17:06:14 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:27:25.525 17:06:14 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:25.525 17:06:14 -- dd/common.sh@31 -- # xtrace_disable 00:27:25.525 17:06:14 -- common/autotest_common.sh@10 -- # set +x 00:27:25.525 [2024-11-05 17:06:14.265256] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
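[Note] The count/size bookkeeping between rounds follows size = count × bs (7 × 8192 = 57344 above, 3 × 16384 = 49152 from here on), and the `for bs in "${bss[@]}"` / `for qd in "${qds[@]}"` trace lines show the harness sweeping nested loops over both knobs. A plausible outline of that sweep, with the array contents and the count mapping inferred from the runs this log actually contains, and assuming gen_bytes prints to stdout as its other uses in this log suggest:

bss=(8192 16384)                         # block sizes exercised in this log
qds=(1 64)                               # queue depths exercised in this log
declare -A counts=([8192]=7 [16384]=3)   # assumed mapping; matches the traced sizes
for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
        count=${counts[$bs]}
        size=$(( count * bs ))           # 57344 or 49152, as logged
        gen_bytes "$size" > "$DUMP0"     # fresh random payload per round
        # ... write, read back, diff, clear_nvme, as in the sketch above ...
    done
done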
00:27:25.525 [2024-11-05 17:06:14.265642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133902 ] 00:27:25.525 { 00:27:25.525 "subsystems": [ 00:27:25.525 { 00:27:25.525 "subsystem": "bdev", 00:27:25.525 "config": [ 00:27:25.525 { 00:27:25.525 "params": { 00:27:25.525 "trtype": "pcie", 00:27:25.525 "traddr": "0000:00:06.0", 00:27:25.525 "name": "Nvme0" 00:27:25.525 }, 00:27:25.525 "method": "bdev_nvme_attach_controller" 00:27:25.525 }, 00:27:25.525 { 00:27:25.525 "method": "bdev_wait_for_examine" 00:27:25.525 } 00:27:25.525 ] 00:27:25.525 } 00:27:25.525 ] 00:27:25.525 } 00:27:25.525 [2024-11-05 17:06:14.414730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.783 [2024-11-05 17:06:14.569314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.041  [2024-11-05T17:06:15.851Z] Copying: 48/48 [kB] (average 46 MBps) 00:27:26.974 00:27:26.974 17:06:15 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:26.974 17:06:15 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:27:26.974 17:06:15 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:26.974 17:06:15 -- dd/common.sh@11 -- # local nvme_ref= 00:27:26.974 17:06:15 -- dd/common.sh@12 -- # local size=49152 00:27:26.974 17:06:15 -- dd/common.sh@14 -- # local bs=1048576 00:27:26.974 17:06:15 -- dd/common.sh@15 -- # local count=1 00:27:26.974 17:06:15 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:26.974 17:06:15 -- dd/common.sh@18 -- # gen_conf 00:27:26.974 17:06:15 -- dd/common.sh@31 -- # xtrace_disable 00:27:26.974 17:06:15 -- common/autotest_common.sh@10 -- # set +x 00:27:27.232 { 00:27:27.232 "subsystems": [ 00:27:27.232 { 00:27:27.232 "subsystem": "bdev", 00:27:27.232 "config": [ 00:27:27.232 { 00:27:27.232 "params": { 00:27:27.232 "trtype": "pcie", 00:27:27.232 "traddr": "0000:00:06.0", 00:27:27.232 "name": "Nvme0" 00:27:27.232 }, 00:27:27.232 "method": "bdev_nvme_attach_controller" 00:27:27.232 }, 00:27:27.232 { 00:27:27.232 "method": "bdev_wait_for_examine" 00:27:27.232 } 00:27:27.232 ] 00:27:27.232 } 00:27:27.232 ] 00:27:27.232 } 00:27:27.232 [2024-11-05 17:06:15.900188] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
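[Note] clear_nvme, whose locals are traced just above (bdev=Nvme0n1, size=49152, bs=1048576, count=1), is the zero-fill that separates rounds: it overwrites the region under test so stale data from the previous pass cannot mask a broken write. Reduced to its observable effect, reusing the SPDK_DD/CONF variables from the earlier sketch:

clear_nvme() {              # signature per the trace: clear_nvme <bdev> <nvme_ref> <size>
    local bdev=$1 nvme_ref=$2 size=$3
    local bs=1048576 count=1
    # one 1 MiB block of zeros over the start of the namespace
    "$SPDK_DD" --if=/dev/zero --bs="$bs" --ob="$bdev" --count="$count" --json <(echo "$CONF")
}
clear_nvme Nvme0n1 '' 49152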
00:27:27.232 [2024-11-05 17:06:15.900560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133930 ] 00:27:27.232 [2024-11-05 17:06:16.069741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.490 [2024-11-05 17:06:16.237096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.748  [2024-11-05T17:06:17.560Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:27:28.683 00:27:28.683 17:06:17 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:28.683 17:06:17 -- dd/basic_rw.sh@23 -- # count=3 00:27:28.683 17:06:17 -- dd/basic_rw.sh@24 -- # count=3 00:27:28.683 17:06:17 -- dd/basic_rw.sh@25 -- # size=49152 00:27:28.683 17:06:17 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:27:28.683 17:06:17 -- dd/common.sh@98 -- # xtrace_disable 00:27:28.683 17:06:17 -- common/autotest_common.sh@10 -- # set +x 00:27:29.249 17:06:17 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:27:29.249 17:06:17 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:29.249 17:06:17 -- dd/common.sh@31 -- # xtrace_disable 00:27:29.249 17:06:17 -- common/autotest_common.sh@10 -- # set +x 00:27:29.249 [2024-11-05 17:06:17.902904] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:29.249 [2024-11-05 17:06:17.903287] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133962 ] 00:27:29.249 { 00:27:29.249 "subsystems": [ 00:27:29.249 { 00:27:29.249 "subsystem": "bdev", 00:27:29.249 "config": [ 00:27:29.249 { 00:27:29.249 "params": { 00:27:29.249 "trtype": "pcie", 00:27:29.249 "traddr": "0000:00:06.0", 00:27:29.249 "name": "Nvme0" 00:27:29.249 }, 00:27:29.249 "method": "bdev_nvme_attach_controller" 00:27:29.249 }, 00:27:29.249 { 00:27:29.249 "method": "bdev_wait_for_examine" 00:27:29.249 } 00:27:29.249 ] 00:27:29.249 } 00:27:29.249 ] 00:27:29.249 } 00:27:29.249 [2024-11-05 17:06:18.053659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.507 [2024-11-05 17:06:18.209505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.765  [2024-11-05T17:06:19.577Z] Copying: 48/48 [kB] (average 46 MBps) 00:27:30.700 00:27:30.700 17:06:19 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:27:30.700 17:06:19 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:30.700 17:06:19 -- dd/common.sh@31 -- # xtrace_disable 00:27:30.700 17:06:19 -- common/autotest_common.sh@10 -- # set +x 00:27:30.700 [2024-11-05 17:06:19.512555] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:30.700 [2024-11-05 17:06:19.512959] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133982 ] 00:27:30.700 { 00:27:30.700 "subsystems": [ 00:27:30.700 { 00:27:30.700 "subsystem": "bdev", 00:27:30.700 "config": [ 00:27:30.700 { 00:27:30.700 "params": { 00:27:30.700 "trtype": "pcie", 00:27:30.700 "traddr": "0000:00:06.0", 00:27:30.700 "name": "Nvme0" 00:27:30.700 }, 00:27:30.700 "method": "bdev_nvme_attach_controller" 00:27:30.700 }, 00:27:30.700 { 00:27:30.700 "method": "bdev_wait_for_examine" 00:27:30.700 } 00:27:30.700 ] 00:27:30.700 } 00:27:30.700 ] 00:27:30.700 } 00:27:30.960 [2024-11-05 17:06:19.665839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.960 [2024-11-05 17:06:19.835863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.527  [2024-11-05T17:06:21.339Z] Copying: 48/48 [kB] (average 46 MBps) 00:27:32.462 00:27:32.462 17:06:21 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:32.462 17:06:21 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:27:32.462 17:06:21 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:32.462 17:06:21 -- dd/common.sh@11 -- # local nvme_ref= 00:27:32.462 17:06:21 -- dd/common.sh@12 -- # local size=49152 00:27:32.462 17:06:21 -- dd/common.sh@14 -- # local bs=1048576 00:27:32.462 17:06:21 -- dd/common.sh@15 -- # local count=1 00:27:32.462 17:06:21 -- dd/common.sh@18 -- # gen_conf 00:27:32.462 17:06:21 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:32.462 17:06:21 -- dd/common.sh@31 -- # xtrace_disable 00:27:32.462 17:06:21 -- common/autotest_common.sh@10 -- # set +x 00:27:32.462 { 00:27:32.462 "subsystems": [ 00:27:32.462 { 00:27:32.462 "subsystem": "bdev", 00:27:32.462 "config": [ 00:27:32.462 { 00:27:32.462 "params": { 00:27:32.462 "trtype": "pcie", 00:27:32.462 "traddr": "0000:00:06.0", 00:27:32.462 "name": "Nvme0" 00:27:32.462 }, 00:27:32.462 "method": "bdev_nvme_attach_controller" 00:27:32.462 }, 00:27:32.462 { 00:27:32.462 "method": "bdev_wait_for_examine" 00:27:32.462 } 00:27:32.462 ] 00:27:32.462 } 00:27:32.462 ] 00:27:32.462 } 00:27:32.462 [2024-11-05 17:06:21.183414] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
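[Note] The identical subsystems JSON appears before every invocation because gen_conf re-emits it each time and spdk_dd reads it from the /dev/fd/62 handle on its command line, i.e. the read end of a process substitution. A stand-in with the same observable output, assuming the helper simply prints the Nvme0 attach config (the real one traced as dd/common.sh can also carry extra per-test config):

gen_conf() {    # assumed minimal shape of the traced dd/common.sh helper
    cat <<'CONF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0"},
   "method": "bdev_nvme_attach_controller"},
  {"method": "bdev_wait_for_examine"}]}]}
CONF
}
# the --json /dev/fd/62 in each logged command is this kind of substitution:
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)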
00:27:32.462 [2024-11-05 17:06:21.183788] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134015 ] 00:27:32.462 [2024-11-05 17:06:21.352854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.721 [2024-11-05 17:06:21.511085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.978  [2024-11-05T17:06:22.790Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:33.913 00:27:33.913 ************************************ 00:27:33.913 END TEST dd_rw 00:27:33.913 ************************************ 00:27:33.913 00:27:33.913 real 0m32.069s 00:27:33.913 user 0m26.389s 00:27:33.913 sys 0m4.399s 00:27:33.913 17:06:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:33.913 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:27:34.171 17:06:22 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:27:34.171 17:06:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:34.171 17:06:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:34.171 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:27:34.171 ************************************ 00:27:34.171 START TEST dd_rw_offset 00:27:34.171 ************************************ 00:27:34.171 17:06:22 -- common/autotest_common.sh@1114 -- # basic_offset 00:27:34.171 17:06:22 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:27:34.171 17:06:22 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:27:34.171 17:06:22 -- dd/common.sh@98 -- # xtrace_disable 00:27:34.171 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:27:34.171 17:06:22 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:27:34.171 17:06:22 -- dd/basic_rw.sh@56 -- # 
data=t47esliauox1teftqkvguvh344ryyutkdo403xsb8arjllwl3qnorkf6ggvtiluks311t8yxv52bxr0012imtv88credbu1k0mdrh75kcgpbi6fnptxbcitybbp9b67he57m9emtpn4cwvldgpyie5rkl08c814fqqyby0bwc79owkh2jqs9idzvvvswpt68inad3d3viag0vy2ltgyoe3tkyuodfl560yavoup5xrz09kfibbt7l8fyuttttfa3xkqyhdhyl71gwnmb8c7yzm0mpgrhihbghts3cco5et7mwganvqma3aecahg6v2f7o03vsxrytpeb3j6h2dwtbl8gx8qdbpr4ke44ee4as96rrpz7on9r2bwp50gvv285sk8ontov0hxu44lci1vasd4w9pql0puy9hix7ffseqkfkw8pg7jlihmekamzyddd6ffsrqy30o8mmzkqm86w55cfii1gjdd70wx783mg9pa0y0gccmto2cxcflis8p3hjmm679remo9bze0q0vqdldf3optdg65vaz4127k7wj5ooue31hgjcbijdt86tbvlt2l6bd5t6y0ko76xqqseap2l5vdb8pvy43gxd7w6c0mtorph9i80g6vk0vit6f4h3nlm19jjpswtz7f2fkgrp4r4tz4o3ecbr3ku9xth0gs6yrh6rf17mlqom0axk56c2er7zq7eyutg9cr4gz4no6ovh03xz46hsa1gw1mdt4q14rlpldtnh5738yb6gvfhzlp0ly8hjoy62c8ojal5gil16l3twrenolkd1kk6ugxatg9buj2zdjxppgh53eqws6urgm7fbpjvg8pvy7hpb42czgfzb5rkvee3dzwoxgnbwn4c9v04owx5ryexe11p2d166sqspqi35pwnfrcvjlpi0y8rhj16stv5lejvhvf3hr1d28u50pys3y3tkmjekj5feiwhkcr002tnz5tm45kwe3qr1qerf74bc9e5mz0lzz2ywr0nw526jnxjx5xfqv4nggf7vvn64dm0b9kkm2j9fvcx5hjnu3vck15t0hkiv7iybswuywtb6z2r5c401talqupgzyfohvaykficy4go1ytyyi3zp1q1165o81dl0ddso18aj89i4i5r526k82fgb1wfz327gzw3629mt91l87muus4lgejmensi45q2ljnnhexsi0u4887tt1y3s2wwfpfl5d6bhl0fiy5dyigheq5jfzi0mqo4vj79xkax8i2n65edjadz6in3gsxmzsijuessdsa0e75tb3lxtgo2qqzynafoxp84391u6q0utf2vh2aps9mqs7j9gyyowto09xikm038unn7gtzxdow2wf96k8mejcvadzwj1v7h4gcdhilxg7jup4obq1qnauxdnv1gxmimtj57xxtr8le5onoajvb97ty1jaq4oyuc4lehyak4k0v8igvia924eusbuwcxt9x2mvc766yvptrk8f2sgxge1l6btt8oc0u08emmoo230wndd0o1rmvqe0okck5o3bejoravofntii880p0o3ul105f4uzazmpk4pla67khwcvf6ksxbcifpq81uyzlpiplhcofavk0do2kabqopsmflvpq2d4af2he343dn1r3qgsd9x54xm4arfif1iiqi0r89098fclu0mbdvs3h6sta0kfxaex3kb0e4v2d6xhdqvfm3zxplkm1arpad47w0s6vj5433n6y801jgg2a9jhju8tg15edm88i6jpnmote264237tpcqa5exn33ld7f2wk8898a7bgmqo6tpkklmt3271p6wvduerdmcfn8ox3mg8uid2x7om79bb8mpx4f3r1nynwdwy8jb29b4940u6dfja2nw9xoemenopplcgz2bhxg038sj6t8zid7fopw4d2rz6vn2sgiljl6jmyhf0h4qythsq757m3yovg8gj72mfwnhr7rky8psk1ccrzkj9npt9ej8ufry4mkvjtuhdfpobyczmxd2v01nx793ba5yynktbe9009vyg98tmljuhqn8e7w0mbn6u0py6xcg2n8clv8l6kni1s8dy03nh5c7nr3uuao9953873pv14h1fhrtqvpo25ozel7d9v72gzyg85ztht9bnpjk5gp14g84u1qudvlxxkanu9vhlq96nh5y5q4duycj0lfvs1h03gfdeulvslvtrfbvtswggrempij28gc0alhu9g3eq5tl2dbfuykuvzed5r2ckdsoamf2jxhc6d0yw8ummiip3nlz689tqjeo8sr82ltp9yhok0li204b3twpk9u3szuxu209vghszh8jniwoelq430uiq5b9lqouzwsqj4osbaxvmk60n1538bb6essj41t0opjmamm9waeuc3u5gqnw6t3v8rbmjw1y4lrijmks5k7182dsrs63xo1nld9cbb91pbz1gbzknpxf8shebvewz296xqz8hut5djtg5tp8h6gclnvwjwhx7c6roem5we9nm96214ovddwmqi6x6axt6g98gvzlnm291akyj9g2r5l04m65djk9cxmye3zp4mu4gcsg8wpx4se7l92skiseh4sisuylv3ngc1fmf690opc0j055fmxfg58g6kkzx477nsug3tvjxs6d8g1hp3yknwz9wza0fnui1hh40pkk1y6lhfuoslntil1uixd05h3k9mgzixode01hgg0hphze3fc3oiwvvypslytnbdqy5voyjt4mdp7cczwai04sn26cp3811uormgwiccvueb7wp8qlyjkyprmkdeogfc342tms1eb9yz9a8w7tj087vzr0mbwn78225c5uuqlvpl7bxz09nn1gjy6jarxejcie6ly9z4hwmdcz2l1hs5ngekp660v2axrp48ow9qpwrpqv7vu47yrsq4gj62aji8g7o1migcs6t3jad1m7lczc14dfrfktyamox5ndjggfglesbnb8rg5vcjkf8ie6fh8ioqe2iyjq6tbapd83vhq2295qrohsyas41y4mxewpmrimh9gv5kgi9arpdzllrhldxl4iwfh6idenm5cp19j95iyjkximk8r9dq7wf7rt7l21nmccwuim85z14gbbx04jsaasxwj3momecqb8bnfbw69lj4dbvnunt6tl7o9i67x94evjvnv3jqto5xx0h1caireoj587p8ns0g1pu3mfu473l8vjhmdjz2mv4mfg62fzb4ju66bfw8celtb4k09ju5rauocka7nbrbtt8rws1h2mmcb3wovc9luiduv5gjf4acmzq7x4snzi2x5a598f5xzpq55qkwz316tzf1suvl8rdyw2uzvvr1pqe7bpv9a072jfan0qivcjo2a5elbz51j5bioobze5mjvey4l0292y6aoqpo7iw0c2g516sfdzb9f0wpqoyeryn95f9sxce742x6qlh6h3nfykcxmwv75e7hpx6mblnsv77rqq4qmzdwarwwaqdajbf0yz5rla0xb32d2bfznihylui2ozdzatz
fi5j351nq3l61mnhns3j30odkbrq5wnqv98sb01vp19cjbc59ana9ly50jtadroggh65ad8vfd7ifshwatemn9634i8iuac7v8wl1xzk97ou4q91z2s2548ye9zo1nkcabsni3oqcaugatx47q5676kfo8lt2j3vaznffh79pu8zrsq3x9qluguq424qp5g5eh5ozgljirkpg49ydkd0qbmw0u52v0oh5ghp4y519x6db895bbzcmrjqra7geg1abut6vxrygjkwfha1mu55ywab3hy17iyb3ibazp5twr0xhvtzgte450ohdgqs071e01kcahpw85xepiece5vctpmztioddnrjecqffvgelmm3cgd26v77ls1n5vu91bmw2vzfuben46kx1swg5hd21u3cs1bs8w2t0i9w3jsj9ked7fb5leccyv3sufy87zwjabg3ij7o155a7mo5d1txal2ji720utow8oecg12hs56bs73yjpz2hm6v1rdec1o9xtqy8lrvp5rlhzr759nenszv5wj0kra8yh 00:27:34.171 17:06:22 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:27:34.171 17:06:22 -- dd/basic_rw.sh@59 -- # gen_conf 00:27:34.172 17:06:22 -- dd/common.sh@31 -- # xtrace_disable 00:27:34.172 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:27:34.172 { 00:27:34.172 "subsystems": [ 00:27:34.172 { 00:27:34.172 "subsystem": "bdev", 00:27:34.172 "config": [ 00:27:34.172 { 00:27:34.172 "params": { 00:27:34.172 "trtype": "pcie", 00:27:34.172 "traddr": "0000:00:06.0", 00:27:34.172 "name": "Nvme0" 00:27:34.172 }, 00:27:34.172 "method": "bdev_nvme_attach_controller" 00:27:34.172 }, 00:27:34.172 { 00:27:34.172 "method": "bdev_wait_for_examine" 00:27:34.172 } 00:27:34.172 ] 00:27:34.172 } 00:27:34.172 ] 00:27:34.172 } 00:27:34.172 [2024-11-05 17:06:22.932943] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:34.172 [2024-11-05 17:06:22.933167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134065 ] 00:27:34.430 [2024-11-05 17:06:23.105473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.430 [2024-11-05 17:06:23.266173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.997  [2024-11-05T17:06:24.439Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:27:35.562 00:27:35.562 17:06:24 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:27:35.562 17:06:24 -- dd/basic_rw.sh@65 -- # gen_conf 00:27:35.562 17:06:24 -- dd/common.sh@31 -- # xtrace_disable 00:27:35.562 17:06:24 -- common/autotest_common.sh@10 -- # set +x 00:27:35.820 { 00:27:35.820 "subsystems": [ 00:27:35.820 { 00:27:35.820 "subsystem": "bdev", 00:27:35.820 "config": [ 00:27:35.820 { 00:27:35.820 "params": { 00:27:35.820 "trtype": "pcie", 00:27:35.820 "traddr": "0000:00:06.0", 00:27:35.820 "name": "Nvme0" 00:27:35.820 }, 00:27:35.820 "method": "bdev_nvme_attach_controller" 00:27:35.820 }, 00:27:35.820 { 00:27:35.820 "method": "bdev_wait_for_examine" 00:27:35.820 } 00:27:35.820 ] 00:27:35.820 } 00:27:35.820 ] 00:27:35.820 } 00:27:35.820 [2024-11-05 17:06:24.517802] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
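[Note] dd_rw_offset narrows the same round trip down to a single 4 KiB block at a non-zero offset: the write above lands at block 1 (--seek=1) and the read now starting pulls back exactly that block (--skip=1 --count=1). The comparison is then done in the shell, which is why the trace below ends in a `read -rn4096` and an enormous `[[ ... == ... ]]` whose right-hand side xtrace escapes character by character. In outline, reusing the variables from the sketches above:

data=$(gen_bytes 4096)               # the 4 KiB random string dumped above
printf %s "$data" > "$DUMP0"
# write one block at offset 1 on the bdev ...
"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json <(echo "$CONF")
# ... and read back only that block
"$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json <(echo "$CONF")
read -rn4096 data_check < "$DUMP1"   # first 4096 bytes of the read-back file
[[ $data == $data_check ]]           # unquoted RHS is why xtrace prints it escaped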
00:27:35.820 [2024-11-05 17:06:24.517999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134089 ] 00:27:35.820 [2024-11-05 17:06:24.686341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.077 [2024-11-05 17:06:24.845722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.335  [2024-11-05T17:06:26.185Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:27:37.308 00:27:37.308 17:06:26 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:27:37.308 ************************************ 00:27:37.308 END TEST dd_rw_offset 00:27:37.308 ************************************ 00:27:37.308 17:06:26 -- dd/basic_rw.sh@72 -- # [[ t47esliauox1teftqkvguvh344ryyutkdo403xsb8arjllwl3qnorkf6ggvtiluks311t8yxv52bxr0012imtv88credbu1k0mdrh75kcgpbi6fnptxbcitybbp9b67he57m9emtpn4cwvldgpyie5rkl08c814fqqyby0bwc79owkh2jqs9idzvvvswpt68inad3d3viag0vy2ltgyoe3tkyuodfl560yavoup5xrz09kfibbt7l8fyuttttfa3xkqyhdhyl71gwnmb8c7yzm0mpgrhihbghts3cco5et7mwganvqma3aecahg6v2f7o03vsxrytpeb3j6h2dwtbl8gx8qdbpr4ke44ee4as96rrpz7on9r2bwp50gvv285sk8ontov0hxu44lci1vasd4w9pql0puy9hix7ffseqkfkw8pg7jlihmekamzyddd6ffsrqy30o8mmzkqm86w55cfii1gjdd70wx783mg9pa0y0gccmto2cxcflis8p3hjmm679remo9bze0q0vqdldf3optdg65vaz4127k7wj5ooue31hgjcbijdt86tbvlt2l6bd5t6y0ko76xqqseap2l5vdb8pvy43gxd7w6c0mtorph9i80g6vk0vit6f4h3nlm19jjpswtz7f2fkgrp4r4tz4o3ecbr3ku9xth0gs6yrh6rf17mlqom0axk56c2er7zq7eyutg9cr4gz4no6ovh03xz46hsa1gw1mdt4q14rlpldtnh5738yb6gvfhzlp0ly8hjoy62c8ojal5gil16l3twrenolkd1kk6ugxatg9buj2zdjxppgh53eqws6urgm7fbpjvg8pvy7hpb42czgfzb5rkvee3dzwoxgnbwn4c9v04owx5ryexe11p2d166sqspqi35pwnfrcvjlpi0y8rhj16stv5lejvhvf3hr1d28u50pys3y3tkmjekj5feiwhkcr002tnz5tm45kwe3qr1qerf74bc9e5mz0lzz2ywr0nw526jnxjx5xfqv4nggf7vvn64dm0b9kkm2j9fvcx5hjnu3vck15t0hkiv7iybswuywtb6z2r5c401talqupgzyfohvaykficy4go1ytyyi3zp1q1165o81dl0ddso18aj89i4i5r526k82fgb1wfz327gzw3629mt91l87muus4lgejmensi45q2ljnnhexsi0u4887tt1y3s2wwfpfl5d6bhl0fiy5dyigheq5jfzi0mqo4vj79xkax8i2n65edjadz6in3gsxmzsijuessdsa0e75tb3lxtgo2qqzynafoxp84391u6q0utf2vh2aps9mqs7j9gyyowto09xikm038unn7gtzxdow2wf96k8mejcvadzwj1v7h4gcdhilxg7jup4obq1qnauxdnv1gxmimtj57xxtr8le5onoajvb97ty1jaq4oyuc4lehyak4k0v8igvia924eusbuwcxt9x2mvc766yvptrk8f2sgxge1l6btt8oc0u08emmoo230wndd0o1rmvqe0okck5o3bejoravofntii880p0o3ul105f4uzazmpk4pla67khwcvf6ksxbcifpq81uyzlpiplhcofavk0do2kabqopsmflvpq2d4af2he343dn1r3qgsd9x54xm4arfif1iiqi0r89098fclu0mbdvs3h6sta0kfxaex3kb0e4v2d6xhdqvfm3zxplkm1arpad47w0s6vj5433n6y801jgg2a9jhju8tg15edm88i6jpnmote264237tpcqa5exn33ld7f2wk8898a7bgmqo6tpkklmt3271p6wvduerdmcfn8ox3mg8uid2x7om79bb8mpx4f3r1nynwdwy8jb29b4940u6dfja2nw9xoemenopplcgz2bhxg038sj6t8zid7fopw4d2rz6vn2sgiljl6jmyhf0h4qythsq757m3yovg8gj72mfwnhr7rky8psk1ccrzkj9npt9ej8ufry4mkvjtuhdfpobyczmxd2v01nx793ba5yynktbe9009vyg98tmljuhqn8e7w0mbn6u0py6xcg2n8clv8l6kni1s8dy03nh5c7nr3uuao9953873pv14h1fhrtqvpo25ozel7d9v72gzyg85ztht9bnpjk5gp14g84u1qudvlxxkanu9vhlq96nh5y5q4duycj0lfvs1h03gfdeulvslvtrfbvtswggrempij28gc0alhu9g3eq5tl2dbfuykuvzed5r2ckdsoamf2jxhc6d0yw8ummiip3nlz689tqjeo8sr82ltp9yhok0li204b3twpk9u3szuxu209vghszh8jniwoelq430uiq5b9lqouzwsqj4osbaxvmk60n1538bb6essj41t0opjmamm9waeuc3u5gqnw6t3v8rbmjw1y4lrijmks5k7182dsrs63xo1nld9cbb91pbz1gbzknpxf8shebvewz296xqz8hut5djtg5tp8h6gclnvwjwhx7c6roem5we9nm96214ovddwmqi6x6axt6g98gvzlnm291akyj9g2r5l04m65djk9cxmye3zp4mu4gcsg8wpx4se7l92skiseh4sisuylv3ngc1fmf690opc0j055fmxfg58g6kkzx477nsug3tvjxs6d8g1hp3yknwz9wza0fnui1hh40pkk1y6lhfuos
lntil1uixd05h3k9mgzixode01hgg0hphze3fc3oiwvvypslytnbdqy5voyjt4mdp7cczwai04sn26cp3811uormgwiccvueb7wp8qlyjkyprmkdeogfc342tms1eb9yz9a8w7tj087vzr0mbwn78225c5uuqlvpl7bxz09nn1gjy6jarxejcie6ly9z4hwmdcz2l1hs5ngekp660v2axrp48ow9qpwrpqv7vu47yrsq4gj62aji8g7o1migcs6t3jad1m7lczc14dfrfktyamox5ndjggfglesbnb8rg5vcjkf8ie6fh8ioqe2iyjq6tbapd83vhq2295qrohsyas41y4mxewpmrimh9gv5kgi9arpdzllrhldxl4iwfh6idenm5cp19j95iyjkximk8r9dq7wf7rt7l21nmccwuim85z14gbbx04jsaasxwj3momecqb8bnfbw69lj4dbvnunt6tl7o9i67x94evjvnv3jqto5xx0h1caireoj587p8ns0g1pu3mfu473l8vjhmdjz2mv4mfg62fzb4ju66bfw8celtb4k09ju5rauocka7nbrbtt8rws1h2mmcb3wovc9luiduv5gjf4acmzq7x4snzi2x5a598f5xzpq55qkwz316tzf1suvl8rdyw2uzvvr1pqe7bpv9a072jfan0qivcjo2a5elbz51j5bioobze5mjvey4l0292y6aoqpo7iw0c2g516sfdzb9f0wpqoyeryn95f9sxce742x6qlh6h3nfykcxmwv75e7hpx6mblnsv77rqq4qmzdwarwwaqdajbf0yz5rla0xb32d2bfznihylui2ozdzatzfi5j351nq3l61mnhns3j30odkbrq5wnqv98sb01vp19cjbc59ana9ly50jtadroggh65ad8vfd7ifshwatemn9634i8iuac7v8wl1xzk97ou4q91z2s2548ye9zo1nkcabsni3oqcaugatx47q5676kfo8lt2j3vaznffh79pu8zrsq3x9qluguq424qp5g5eh5ozgljirkpg49ydkd0qbmw0u52v0oh5ghp4y519x6db895bbzcmrjqra7geg1abut6vxrygjkwfha1mu55ywab3hy17iyb3ibazp5twr0xhvtzgte450ohdgqs071e01kcahpw85xepiece5vctpmztioddnrjecqffvgelmm3cgd26v77ls1n5vu91bmw2vzfuben46kx1swg5hd21u3cs1bs8w2t0i9w3jsj9ked7fb5leccyv3sufy87zwjabg3ij7o155a7mo5d1txal2ji720utow8oecg12hs56bs73yjpz2hm6v1rdec1o9xtqy8lrvp5rlhzr759nenszv5wj0kra8yh == \t\4\7\e\s\l\i\a\u\o\x\1\t\e\f\t\q\k\v\g\u\v\h\3\4\4\r\y\y\u\t\k\d\o\4\0\3\x\s\b\8\a\r\j\l\l\w\l\3\q\n\o\r\k\f\6\g\g\v\t\i\l\u\k\s\3\1\1\t\8\y\x\v\5\2\b\x\r\0\0\1\2\i\m\t\v\8\8\c\r\e\d\b\u\1\k\0\m\d\r\h\7\5\k\c\g\p\b\i\6\f\n\p\t\x\b\c\i\t\y\b\b\p\9\b\6\7\h\e\5\7\m\9\e\m\t\p\n\4\c\w\v\l\d\g\p\y\i\e\5\r\k\l\0\8\c\8\1\4\f\q\q\y\b\y\0\b\w\c\7\9\o\w\k\h\2\j\q\s\9\i\d\z\v\v\v\s\w\p\t\6\8\i\n\a\d\3\d\3\v\i\a\g\0\v\y\2\l\t\g\y\o\e\3\t\k\y\u\o\d\f\l\5\6\0\y\a\v\o\u\p\5\x\r\z\0\9\k\f\i\b\b\t\7\l\8\f\y\u\t\t\t\t\f\a\3\x\k\q\y\h\d\h\y\l\7\1\g\w\n\m\b\8\c\7\y\z\m\0\m\p\g\r\h\i\h\b\g\h\t\s\3\c\c\o\5\e\t\7\m\w\g\a\n\v\q\m\a\3\a\e\c\a\h\g\6\v\2\f\7\o\0\3\v\s\x\r\y\t\p\e\b\3\j\6\h\2\d\w\t\b\l\8\g\x\8\q\d\b\p\r\4\k\e\4\4\e\e\4\a\s\9\6\r\r\p\z\7\o\n\9\r\2\b\w\p\5\0\g\v\v\2\8\5\s\k\8\o\n\t\o\v\0\h\x\u\4\4\l\c\i\1\v\a\s\d\4\w\9\p\q\l\0\p\u\y\9\h\i\x\7\f\f\s\e\q\k\f\k\w\8\p\g\7\j\l\i\h\m\e\k\a\m\z\y\d\d\d\6\f\f\s\r\q\y\3\0\o\8\m\m\z\k\q\m\8\6\w\5\5\c\f\i\i\1\g\j\d\d\7\0\w\x\7\8\3\m\g\9\p\a\0\y\0\g\c\c\m\t\o\2\c\x\c\f\l\i\s\8\p\3\h\j\m\m\6\7\9\r\e\m\o\9\b\z\e\0\q\0\v\q\d\l\d\f\3\o\p\t\d\g\6\5\v\a\z\4\1\2\7\k\7\w\j\5\o\o\u\e\3\1\h\g\j\c\b\i\j\d\t\8\6\t\b\v\l\t\2\l\6\b\d\5\t\6\y\0\k\o\7\6\x\q\q\s\e\a\p\2\l\5\v\d\b\8\p\v\y\4\3\g\x\d\7\w\6\c\0\m\t\o\r\p\h\9\i\8\0\g\6\v\k\0\v\i\t\6\f\4\h\3\n\l\m\1\9\j\j\p\s\w\t\z\7\f\2\f\k\g\r\p\4\r\4\t\z\4\o\3\e\c\b\r\3\k\u\9\x\t\h\0\g\s\6\y\r\h\6\r\f\1\7\m\l\q\o\m\0\a\x\k\5\6\c\2\e\r\7\z\q\7\e\y\u\t\g\9\c\r\4\g\z\4\n\o\6\o\v\h\0\3\x\z\4\6\h\s\a\1\g\w\1\m\d\t\4\q\1\4\r\l\p\l\d\t\n\h\5\7\3\8\y\b\6\g\v\f\h\z\l\p\0\l\y\8\h\j\o\y\6\2\c\8\o\j\a\l\5\g\i\l\1\6\l\3\t\w\r\e\n\o\l\k\d\1\k\k\6\u\g\x\a\t\g\9\b\u\j\2\z\d\j\x\p\p\g\h\5\3\e\q\w\s\6\u\r\g\m\7\f\b\p\j\v\g\8\p\v\y\7\h\p\b\4\2\c\z\g\f\z\b\5\r\k\v\e\e\3\d\z\w\o\x\g\n\b\w\n\4\c\9\v\0\4\o\w\x\5\r\y\e\x\e\1\1\p\2\d\1\6\6\s\q\s\p\q\i\3\5\p\w\n\f\r\c\v\j\l\p\i\0\y\8\r\h\j\1\6\s\t\v\5\l\e\j\v\h\v\f\3\h\r\1\d\2\8\u\5\0\p\y\s\3\y\3\t\k\m\j\e\k\j\5\f\e\i\w\h\k\c\r\0\0\2\t\n\z\5\t\m\4\5\k\w\e\3\q\r\1\q\e\r\f\7\4\b\c\9\e\5\m\z\0\l\z\z\2\y\w\r\0\n\w\5\2\6\j\n\x\j\x\5\x\f\q\v\4\n\g\g\f\7\v\v\n\6\4\d\m\0\b\9\k\k\m\2\j\9\f\v\c\x\5\h\j\n\u\3\v\c\k\1\5\t\0\h\k\i\v\7\i\y\b\s\w\u\y\w\
t\b\6\z\2\r\5\c\4\0\1\t\a\l\q\u\p\g\z\y\f\o\h\v\a\y\k\f\i\c\y\4\g\o\1\y\t\y\y\i\3\z\p\1\q\1\1\6\5\o\8\1\d\l\0\d\d\s\o\1\8\a\j\8\9\i\4\i\5\r\5\2\6\k\8\2\f\g\b\1\w\f\z\3\2\7\g\z\w\3\6\2\9\m\t\9\1\l\8\7\m\u\u\s\4\l\g\e\j\m\e\n\s\i\4\5\q\2\l\j\n\n\h\e\x\s\i\0\u\4\8\8\7\t\t\1\y\3\s\2\w\w\f\p\f\l\5\d\6\b\h\l\0\f\i\y\5\d\y\i\g\h\e\q\5\j\f\z\i\0\m\q\o\4\v\j\7\9\x\k\a\x\8\i\2\n\6\5\e\d\j\a\d\z\6\i\n\3\g\s\x\m\z\s\i\j\u\e\s\s\d\s\a\0\e\7\5\t\b\3\l\x\t\g\o\2\q\q\z\y\n\a\f\o\x\p\8\4\3\9\1\u\6\q\0\u\t\f\2\v\h\2\a\p\s\9\m\q\s\7\j\9\g\y\y\o\w\t\o\0\9\x\i\k\m\0\3\8\u\n\n\7\g\t\z\x\d\o\w\2\w\f\9\6\k\8\m\e\j\c\v\a\d\z\w\j\1\v\7\h\4\g\c\d\h\i\l\x\g\7\j\u\p\4\o\b\q\1\q\n\a\u\x\d\n\v\1\g\x\m\i\m\t\j\5\7\x\x\t\r\8\l\e\5\o\n\o\a\j\v\b\9\7\t\y\1\j\a\q\4\o\y\u\c\4\l\e\h\y\a\k\4\k\0\v\8\i\g\v\i\a\9\2\4\e\u\s\b\u\w\c\x\t\9\x\2\m\v\c\7\6\6\y\v\p\t\r\k\8\f\2\s\g\x\g\e\1\l\6\b\t\t\8\o\c\0\u\0\8\e\m\m\o\o\2\3\0\w\n\d\d\0\o\1\r\m\v\q\e\0\o\k\c\k\5\o\3\b\e\j\o\r\a\v\o\f\n\t\i\i\8\8\0\p\0\o\3\u\l\1\0\5\f\4\u\z\a\z\m\p\k\4\p\l\a\6\7\k\h\w\c\v\f\6\k\s\x\b\c\i\f\p\q\8\1\u\y\z\l\p\i\p\l\h\c\o\f\a\v\k\0\d\o\2\k\a\b\q\o\p\s\m\f\l\v\p\q\2\d\4\a\f\2\h\e\3\4\3\d\n\1\r\3\q\g\s\d\9\x\5\4\x\m\4\a\r\f\i\f\1\i\i\q\i\0\r\8\9\0\9\8\f\c\l\u\0\m\b\d\v\s\3\h\6\s\t\a\0\k\f\x\a\e\x\3\k\b\0\e\4\v\2\d\6\x\h\d\q\v\f\m\3\z\x\p\l\k\m\1\a\r\p\a\d\4\7\w\0\s\6\v\j\5\4\3\3\n\6\y\8\0\1\j\g\g\2\a\9\j\h\j\u\8\t\g\1\5\e\d\m\8\8\i\6\j\p\n\m\o\t\e\2\6\4\2\3\7\t\p\c\q\a\5\e\x\n\3\3\l\d\7\f\2\w\k\8\8\9\8\a\7\b\g\m\q\o\6\t\p\k\k\l\m\t\3\2\7\1\p\6\w\v\d\u\e\r\d\m\c\f\n\8\o\x\3\m\g\8\u\i\d\2\x\7\o\m\7\9\b\b\8\m\p\x\4\f\3\r\1\n\y\n\w\d\w\y\8\j\b\2\9\b\4\9\4\0\u\6\d\f\j\a\2\n\w\9\x\o\e\m\e\n\o\p\p\l\c\g\z\2\b\h\x\g\0\3\8\s\j\6\t\8\z\i\d\7\f\o\p\w\4\d\2\r\z\6\v\n\2\s\g\i\l\j\l\6\j\m\y\h\f\0\h\4\q\y\t\h\s\q\7\5\7\m\3\y\o\v\g\8\g\j\7\2\m\f\w\n\h\r\7\r\k\y\8\p\s\k\1\c\c\r\z\k\j\9\n\p\t\9\e\j\8\u\f\r\y\4\m\k\v\j\t\u\h\d\f\p\o\b\y\c\z\m\x\d\2\v\0\1\n\x\7\9\3\b\a\5\y\y\n\k\t\b\e\9\0\0\9\v\y\g\9\8\t\m\l\j\u\h\q\n\8\e\7\w\0\m\b\n\6\u\0\p\y\6\x\c\g\2\n\8\c\l\v\8\l\6\k\n\i\1\s\8\d\y\0\3\n\h\5\c\7\n\r\3\u\u\a\o\9\9\5\3\8\7\3\p\v\1\4\h\1\f\h\r\t\q\v\p\o\2\5\o\z\e\l\7\d\9\v\7\2\g\z\y\g\8\5\z\t\h\t\9\b\n\p\j\k\5\g\p\1\4\g\8\4\u\1\q\u\d\v\l\x\x\k\a\n\u\9\v\h\l\q\9\6\n\h\5\y\5\q\4\d\u\y\c\j\0\l\f\v\s\1\h\0\3\g\f\d\e\u\l\v\s\l\v\t\r\f\b\v\t\s\w\g\g\r\e\m\p\i\j\2\8\g\c\0\a\l\h\u\9\g\3\e\q\5\t\l\2\d\b\f\u\y\k\u\v\z\e\d\5\r\2\c\k\d\s\o\a\m\f\2\j\x\h\c\6\d\0\y\w\8\u\m\m\i\i\p\3\n\l\z\6\8\9\t\q\j\e\o\8\s\r\8\2\l\t\p\9\y\h\o\k\0\l\i\2\0\4\b\3\t\w\p\k\9\u\3\s\z\u\x\u\2\0\9\v\g\h\s\z\h\8\j\n\i\w\o\e\l\q\4\3\0\u\i\q\5\b\9\l\q\o\u\z\w\s\q\j\4\o\s\b\a\x\v\m\k\6\0\n\1\5\3\8\b\b\6\e\s\s\j\4\1\t\0\o\p\j\m\a\m\m\9\w\a\e\u\c\3\u\5\g\q\n\w\6\t\3\v\8\r\b\m\j\w\1\y\4\l\r\i\j\m\k\s\5\k\7\1\8\2\d\s\r\s\6\3\x\o\1\n\l\d\9\c\b\b\9\1\p\b\z\1\g\b\z\k\n\p\x\f\8\s\h\e\b\v\e\w\z\2\9\6\x\q\z\8\h\u\t\5\d\j\t\g\5\t\p\8\h\6\g\c\l\n\v\w\j\w\h\x\7\c\6\r\o\e\m\5\w\e\9\n\m\9\6\2\1\4\o\v\d\d\w\m\q\i\6\x\6\a\x\t\6\g\9\8\g\v\z\l\n\m\2\9\1\a\k\y\j\9\g\2\r\5\l\0\4\m\6\5\d\j\k\9\c\x\m\y\e\3\z\p\4\m\u\4\g\c\s\g\8\w\p\x\4\s\e\7\l\9\2\s\k\i\s\e\h\4\s\i\s\u\y\l\v\3\n\g\c\1\f\m\f\6\9\0\o\p\c\0\j\0\5\5\f\m\x\f\g\5\8\g\6\k\k\z\x\4\7\7\n\s\u\g\3\t\v\j\x\s\6\d\8\g\1\h\p\3\y\k\n\w\z\9\w\z\a\0\f\n\u\i\1\h\h\4\0\p\k\k\1\y\6\l\h\f\u\o\s\l\n\t\i\l\1\u\i\x\d\0\5\h\3\k\9\m\g\z\i\x\o\d\e\0\1\h\g\g\0\h\p\h\z\e\3\f\c\3\o\i\w\v\v\y\p\s\l\y\t\n\b\d\q\y\5\v\o\y\j\t\4\m\d\p\7\c\c\z\w\a\i\0\4\s\n\2\6\c\p\3\8\1\1\u\o\r\m\g\w\i\c\c\v\u\e\b\7\w\p\8\q\l\y\j\k\y\p\r\m\k\d\e\o\g\f\c\3\4\2\t\m\s\1\e\b\9\y\z\9\a\8\w\7\t\j\0\8\7\v\z\r\0\m\b\w\n\7\8\2\2\5\c\5
\u\u\q\l\v\p\l\7\b\x\z\0\9\n\n\1\g\j\y\6\j\a\r\x\e\j\c\i\e\6\l\y\9\z\4\h\w\m\d\c\z\2\l\1\h\s\5\n\g\e\k\p\6\6\0\v\2\a\x\r\p\4\8\o\w\9\q\p\w\r\p\q\v\7\v\u\4\7\y\r\s\q\4\g\j\6\2\a\j\i\8\g\7\o\1\m\i\g\c\s\6\t\3\j\a\d\1\m\7\l\c\z\c\1\4\d\f\r\f\k\t\y\a\m\o\x\5\n\d\j\g\g\f\g\l\e\s\b\n\b\8\r\g\5\v\c\j\k\f\8\i\e\6\f\h\8\i\o\q\e\2\i\y\j\q\6\t\b\a\p\d\8\3\v\h\q\2\2\9\5\q\r\o\h\s\y\a\s\4\1\y\4\m\x\e\w\p\m\r\i\m\h\9\g\v\5\k\g\i\9\a\r\p\d\z\l\l\r\h\l\d\x\l\4\i\w\f\h\6\i\d\e\n\m\5\c\p\1\9\j\9\5\i\y\j\k\x\i\m\k\8\r\9\d\q\7\w\f\7\r\t\7\l\2\1\n\m\c\c\w\u\i\m\8\5\z\1\4\g\b\b\x\0\4\j\s\a\a\s\x\w\j\3\m\o\m\e\c\q\b\8\b\n\f\b\w\6\9\l\j\4\d\b\v\n\u\n\t\6\t\l\7\o\9\i\6\7\x\9\4\e\v\j\v\n\v\3\j\q\t\o\5\x\x\0\h\1\c\a\i\r\e\o\j\5\8\7\p\8\n\s\0\g\1\p\u\3\m\f\u\4\7\3\l\8\v\j\h\m\d\j\z\2\m\v\4\m\f\g\6\2\f\z\b\4\j\u\6\6\b\f\w\8\c\e\l\t\b\4\k\0\9\j\u\5\r\a\u\o\c\k\a\7\n\b\r\b\t\t\8\r\w\s\1\h\2\m\m\c\b\3\w\o\v\c\9\l\u\i\d\u\v\5\g\j\f\4\a\c\m\z\q\7\x\4\s\n\z\i\2\x\5\a\5\9\8\f\5\x\z\p\q\5\5\q\k\w\z\3\1\6\t\z\f\1\s\u\v\l\8\r\d\y\w\2\u\z\v\v\r\1\p\q\e\7\b\p\v\9\a\0\7\2\j\f\a\n\0\q\i\v\c\j\o\2\a\5\e\l\b\z\5\1\j\5\b\i\o\o\b\z\e\5\m\j\v\e\y\4\l\0\2\9\2\y\6\a\o\q\p\o\7\i\w\0\c\2\g\5\1\6\s\f\d\z\b\9\f\0\w\p\q\o\y\e\r\y\n\9\5\f\9\s\x\c\e\7\4\2\x\6\q\l\h\6\h\3\n\f\y\k\c\x\m\w\v\7\5\e\7\h\p\x\6\m\b\l\n\s\v\7\7\r\q\q\4\q\m\z\d\w\a\r\w\w\a\q\d\a\j\b\f\0\y\z\5\r\l\a\0\x\b\3\2\d\2\b\f\z\n\i\h\y\l\u\i\2\o\z\d\z\a\t\z\f\i\5\j\3\5\1\n\q\3\l\6\1\m\n\h\n\s\3\j\3\0\o\d\k\b\r\q\5\w\n\q\v\9\8\s\b\0\1\v\p\1\9\c\j\b\c\5\9\a\n\a\9\l\y\5\0\j\t\a\d\r\o\g\g\h\6\5\a\d\8\v\f\d\7\i\f\s\h\w\a\t\e\m\n\9\6\3\4\i\8\i\u\a\c\7\v\8\w\l\1\x\z\k\9\7\o\u\4\q\9\1\z\2\s\2\5\4\8\y\e\9\z\o\1\n\k\c\a\b\s\n\i\3\o\q\c\a\u\g\a\t\x\4\7\q\5\6\7\6\k\f\o\8\l\t\2\j\3\v\a\z\n\f\f\h\7\9\p\u\8\z\r\s\q\3\x\9\q\l\u\g\u\q\4\2\4\q\p\5\g\5\e\h\5\o\z\g\l\j\i\r\k\p\g\4\9\y\d\k\d\0\q\b\m\w\0\u\5\2\v\0\o\h\5\g\h\p\4\y\5\1\9\x\6\d\b\8\9\5\b\b\z\c\m\r\j\q\r\a\7\g\e\g\1\a\b\u\t\6\v\x\r\y\g\j\k\w\f\h\a\1\m\u\5\5\y\w\a\b\3\h\y\1\7\i\y\b\3\i\b\a\z\p\5\t\w\r\0\x\h\v\t\z\g\t\e\4\5\0\o\h\d\g\q\s\0\7\1\e\0\1\k\c\a\h\p\w\8\5\x\e\p\i\e\c\e\5\v\c\t\p\m\z\t\i\o\d\d\n\r\j\e\c\q\f\f\v\g\e\l\m\m\3\c\g\d\2\6\v\7\7\l\s\1\n\5\v\u\9\1\b\m\w\2\v\z\f\u\b\e\n\4\6\k\x\1\s\w\g\5\h\d\2\1\u\3\c\s\1\b\s\8\w\2\t\0\i\9\w\3\j\s\j\9\k\e\d\7\f\b\5\l\e\c\c\y\v\3\s\u\f\y\8\7\z\w\j\a\b\g\3\i\j\7\o\1\5\5\a\7\m\o\5\d\1\t\x\a\l\2\j\i\7\2\0\u\t\o\w\8\o\e\c\g\1\2\h\s\5\6\b\s\7\3\y\j\p\z\2\h\m\6\v\1\r\d\e\c\1\o\9\x\t\q\y\8\l\r\v\p\5\r\l\h\z\r\7\5\9\n\e\n\s\z\v\5\w\j\0\k\r\a\8\y\h ]] 00:27:37.308 00:27:37.308 real 0m3.282s 00:27:37.308 user 0m2.624s 00:27:37.308 sys 0m0.532s 00:27:37.308 17:06:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:37.308 17:06:26 -- common/autotest_common.sh@10 -- # set +x 00:27:37.308 17:06:26 -- dd/basic_rw.sh@1 -- # cleanup 00:27:37.308 17:06:26 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:27:37.308 17:06:26 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:37.308 17:06:26 -- dd/common.sh@11 -- # local nvme_ref= 00:27:37.308 17:06:26 -- dd/common.sh@12 -- # local size=0xffff 00:27:37.308 17:06:26 -- dd/common.sh@14 -- # local bs=1048576 00:27:37.308 17:06:26 -- dd/common.sh@15 -- # local count=1 00:27:37.308 17:06:26 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:37.308 17:06:26 -- dd/common.sh@18 -- # gen_conf 00:27:37.308 17:06:26 -- dd/common.sh@31 -- # xtrace_disable 00:27:37.308 17:06:26 -- common/autotest_common.sh@10 -- # set +x 00:27:37.567 { 00:27:37.567 "subsystems": [ 
00:27:37.567 { 00:27:37.567 "subsystem": "bdev", 00:27:37.567 "config": [ 00:27:37.567 { 00:27:37.567 "params": { 00:27:37.567 "trtype": "pcie", 00:27:37.567 "traddr": "0000:00:06.0", 00:27:37.567 "name": "Nvme0" 00:27:37.567 }, 00:27:37.567 "method": "bdev_nvme_attach_controller" 00:27:37.567 }, 00:27:37.567 { 00:27:37.567 "method": "bdev_wait_for_examine" 00:27:37.567 } 00:27:37.567 ] 00:27:37.567 } 00:27:37.567 ] 00:27:37.567 } 00:27:37.567 [2024-11-05 17:06:26.214548] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:37.567 [2024-11-05 17:06:26.214736] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134136 ] 00:27:37.567 [2024-11-05 17:06:26.382677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.825 [2024-11-05 17:06:26.545306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.082  [2024-11-05T17:06:27.893Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:39.016 00:27:39.016 17:06:27 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:39.016 00:27:39.016 real 0m39.377s 00:27:39.016 user 0m32.128s 00:27:39.016 sys 0m5.638s 00:27:39.016 17:06:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:39.016 17:06:27 -- common/autotest_common.sh@10 -- # set +x 00:27:39.016 ************************************ 00:27:39.016 END TEST spdk_dd_basic_rw 00:27:39.016 ************************************ 00:27:39.016 17:06:27 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:27:39.016 17:06:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:39.016 17:06:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:39.016 17:06:27 -- common/autotest_common.sh@10 -- # set +x 00:27:39.016 ************************************ 00:27:39.016 START TEST spdk_dd_posix 00:27:39.016 ************************************ 00:27:39.016 17:06:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:27:39.274 * Looking for test storage... 
00:27:39.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:39.274 17:06:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:39.274 17:06:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:39.274 17:06:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:39.275 17:06:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:39.275 17:06:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:39.275 17:06:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:39.275 17:06:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:39.275 17:06:28 -- scripts/common.sh@335 -- # IFS=.-: 00:27:39.275 17:06:28 -- scripts/common.sh@335 -- # read -ra ver1 00:27:39.275 17:06:28 -- scripts/common.sh@336 -- # IFS=.-: 00:27:39.275 17:06:28 -- scripts/common.sh@336 -- # read -ra ver2 00:27:39.275 17:06:28 -- scripts/common.sh@337 -- # local 'op=<' 00:27:39.275 17:06:28 -- scripts/common.sh@339 -- # ver1_l=2 00:27:39.275 17:06:28 -- scripts/common.sh@340 -- # ver2_l=1 00:27:39.275 17:06:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:39.275 17:06:28 -- scripts/common.sh@343 -- # case "$op" in 00:27:39.275 17:06:28 -- scripts/common.sh@344 -- # : 1 00:27:39.275 17:06:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:39.275 17:06:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:39.275 17:06:28 -- scripts/common.sh@364 -- # decimal 1 00:27:39.275 17:06:28 -- scripts/common.sh@352 -- # local d=1 00:27:39.275 17:06:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:39.275 17:06:28 -- scripts/common.sh@354 -- # echo 1 00:27:39.275 17:06:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:39.275 17:06:28 -- scripts/common.sh@365 -- # decimal 2 00:27:39.275 17:06:28 -- scripts/common.sh@352 -- # local d=2 00:27:39.275 17:06:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:39.275 17:06:28 -- scripts/common.sh@354 -- # echo 2 00:27:39.275 17:06:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:39.275 17:06:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:39.275 17:06:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:39.275 17:06:28 -- scripts/common.sh@367 -- # return 0 00:27:39.275 17:06:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:39.275 17:06:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:39.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.275 --rc genhtml_branch_coverage=1 00:27:39.275 --rc genhtml_function_coverage=1 00:27:39.275 --rc genhtml_legend=1 00:27:39.275 --rc geninfo_all_blocks=1 00:27:39.275 --rc geninfo_unexecuted_blocks=1 00:27:39.275 00:27:39.275 ' 00:27:39.275 17:06:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:39.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.275 --rc genhtml_branch_coverage=1 00:27:39.275 --rc genhtml_function_coverage=1 00:27:39.275 --rc genhtml_legend=1 00:27:39.275 --rc geninfo_all_blocks=1 00:27:39.275 --rc geninfo_unexecuted_blocks=1 00:27:39.275 00:27:39.275 ' 00:27:39.275 17:06:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:39.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.275 --rc genhtml_branch_coverage=1 00:27:39.275 --rc genhtml_function_coverage=1 00:27:39.275 --rc genhtml_legend=1 00:27:39.275 --rc geninfo_all_blocks=1 00:27:39.275 --rc geninfo_unexecuted_blocks=1 00:27:39.275 00:27:39.275 ' 00:27:39.275 17:06:28 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:39.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.275 --rc genhtml_branch_coverage=1 00:27:39.275 --rc genhtml_function_coverage=1 00:27:39.275 --rc genhtml_legend=1 00:27:39.275 --rc geninfo_all_blocks=1 00:27:39.275 --rc geninfo_unexecuted_blocks=1 00:27:39.275 00:27:39.275 ' 00:27:39.275 17:06:28 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:39.275 17:06:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.275 17:06:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.275 17:06:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.275 17:06:28 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:39.275 17:06:28 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:39.275 17:06:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:39.275 17:06:28 -- paths/export.sh@5 -- # export PATH 00:27:39.275 17:06:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:39.275 17:06:28 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:27:39.275 17:06:28 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:27:39.275 17:06:28 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:27:39.275 17:06:28 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:27:39.275 17:06:28 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:39.275 17:06:28 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:39.275 17:06:28 -- 
dd/posix.sh@130 -- # tests 00:27:39.275 17:06:28 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:27:39.275 * First test run, using AIO 00:27:39.275 17:06:28 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:27:39.275 17:06:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:39.275 17:06:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:39.275 17:06:28 -- common/autotest_common.sh@10 -- # set +x 00:27:39.275 ************************************ 00:27:39.275 START TEST dd_flag_append 00:27:39.275 ************************************ 00:27:39.275 17:06:28 -- common/autotest_common.sh@1114 -- # append 00:27:39.275 17:06:28 -- dd/posix.sh@16 -- # local dump0 00:27:39.275 17:06:28 -- dd/posix.sh@17 -- # local dump1 00:27:39.275 17:06:28 -- dd/posix.sh@19 -- # gen_bytes 32 00:27:39.275 17:06:28 -- dd/common.sh@98 -- # xtrace_disable 00:27:39.275 17:06:28 -- common/autotest_common.sh@10 -- # set +x 00:27:39.275 17:06:28 -- dd/posix.sh@19 -- # dump0=s9k5hh16ec4z2vgsomhwa4uiek9i16pn 00:27:39.275 17:06:28 -- dd/posix.sh@20 -- # gen_bytes 32 00:27:39.275 17:06:28 -- dd/common.sh@98 -- # xtrace_disable 00:27:39.275 17:06:28 -- common/autotest_common.sh@10 -- # set +x 00:27:39.275 17:06:28 -- dd/posix.sh@20 -- # dump1=2eosyk3myi2evi6vdh1yl9cko7i1tx9y 00:27:39.275 17:06:28 -- dd/posix.sh@22 -- # printf %s s9k5hh16ec4z2vgsomhwa4uiek9i16pn 00:27:39.275 17:06:28 -- dd/posix.sh@23 -- # printf %s 2eosyk3myi2evi6vdh1yl9cko7i1tx9y 00:27:39.275 17:06:28 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:27:39.275 [2024-11-05 17:06:28.120312] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
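[Note] dd_flag_append seeds two 32-byte random strings (dump0 = s9k5…, dump1 = 2eos… in this run), writes each to its own file, and then copies file 0 onto file 1 with --oflag=append; the check that follows the copy requires file 1 to hold its original bytes with file 0's appended. The shape of the test, with variables as in the earlier sketches:

dump0=$(gen_bytes 32)                # s9k5hh16ec4z2vgsomhwa4uiek9i16pn above
dump1=$(gen_bytes 32)                # 2eosyk3myi2evi6vdh1yl9cko7i1tx9y above
printf %s "$dump0" > "$DUMP0"
printf %s "$dump1" > "$DUMP1"
# O_APPEND open: the destination keeps its contents and grows at the end
"$SPDK_DD" --if="$DUMP0" --of="$DUMP1" --oflag=append
[[ $(< "$DUMP1") == "${dump1}${dump0}" ]]   # original dump1 bytes, then dump0's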
00:27:39.275 [2024-11-05 17:06:28.120494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134225 ] 00:27:39.533 [2024-11-05 17:06:28.274180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.791 [2024-11-05 17:06:28.433676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.791  [2024-11-05T17:06:29.601Z] Copying: 32/32 [B] (average 31 kBps) 00:27:40.724 00:27:40.982 17:06:29 -- dd/posix.sh@27 -- # [[ 2eosyk3myi2evi6vdh1yl9cko7i1tx9ys9k5hh16ec4z2vgsomhwa4uiek9i16pn == \2\e\o\s\y\k\3\m\y\i\2\e\v\i\6\v\d\h\1\y\l\9\c\k\o\7\i\1\t\x\9\y\s\9\k\5\h\h\1\6\e\c\4\z\2\v\g\s\o\m\h\w\a\4\u\i\e\k\9\i\1\6\p\n ]] 00:27:40.982 00:27:40.982 real 0m1.567s 00:27:40.982 user 0m1.223s 00:27:40.982 sys 0m0.215s 00:27:40.982 17:06:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:40.982 17:06:29 -- common/autotest_common.sh@10 -- # set +x 00:27:40.982 ************************************ 00:27:40.982 END TEST dd_flag_append 00:27:40.982 ************************************ 00:27:40.982 17:06:29 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:27:40.982 17:06:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:40.982 17:06:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:40.982 17:06:29 -- common/autotest_common.sh@10 -- # set +x 00:27:40.982 ************************************ 00:27:40.982 START TEST dd_flag_directory 00:27:40.982 ************************************ 00:27:40.982 17:06:29 -- common/autotest_common.sh@1114 -- # directory 00:27:40.982 17:06:29 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:40.982 17:06:29 -- common/autotest_common.sh@650 -- # local es=0 00:27:40.982 17:06:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:40.982 17:06:29 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:40.982 17:06:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:40.982 17:06:29 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:40.982 17:06:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:40.982 17:06:29 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:40.982 17:06:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:40.982 17:06:29 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:40.982 17:06:29 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:40.982 17:06:29 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:40.982 [2024-11-05 17:06:29.748378] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
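[Note] dd_flag_directory is a negative test: dd.dump0 is a regular file, so opening it with --iflag=directory (and, in the second half, writing it with --oflag=directory) has to fail with "Not a directory", and the NOT wrapper from autotest_common.sh inverts the exit status so the test passes only when spdk_dd errors out:

# both runs are expected to fail with ENOTDIR
NOT "$SPDK_DD" --if="$DUMP0" --iflag=directory --of="$DUMP0"
NOT "$SPDK_DD" --if="$DUMP0" --of="$DUMP0" --oflag=directory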
00:27:40.982 [2024-11-05 17:06:29.748591] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134268 ] 00:27:41.240 [2024-11-05 17:06:29.918290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.240 [2024-11-05 17:06:30.095277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.498 [2024-11-05 17:06:30.341667] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:41.498 [2024-11-05 17:06:30.341756] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:41.498 [2024-11-05 17:06:30.341800] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:42.064 [2024-11-05 17:06:30.919960] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:42.630 17:06:31 -- common/autotest_common.sh@653 -- # es=236 00:27:42.630 17:06:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:42.630 17:06:31 -- common/autotest_common.sh@662 -- # es=108 00:27:42.630 17:06:31 -- common/autotest_common.sh@663 -- # case "$es" in 00:27:42.630 17:06:31 -- common/autotest_common.sh@670 -- # es=1 00:27:42.630 17:06:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:42.630 17:06:31 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:42.630 17:06:31 -- common/autotest_common.sh@650 -- # local es=0 00:27:42.630 17:06:31 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:42.630 17:06:31 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.630 17:06:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:42.630 17:06:31 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.630 17:06:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:42.630 17:06:31 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.630 17:06:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:42.630 17:06:31 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.630 17:06:31 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:42.630 17:06:31 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:42.630 [2024-11-05 17:06:31.322046] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
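[Note] The es=236 → 108 → 1 hops in the wrapper trace are its exit-status normalization. 236 is consistent with spdk_dd returning -ENOTDIR (256 − 20) to the shell, just as the nofollow failures below surface as 216 (256 − ELOOP, 40); NOT folds anything above 128 back down by 128 and collapses the remaining failure codes to 1 before negating. A hedged reconstruction of what autotest_common.sh is doing here (the exact case arms may differ):

NOT() {
    local es=0
    "$@" || es=$?
    if (( es > 128 )); then
        es=$(( es - 128 ))       # 236 -> 108 (ENOTDIR), 216 -> 88 (ELOOP)
    fi
    case "$es" in
        0) ;;
        *) es=1 ;;               # any surviving failure counts the same
    esac
    (( !es == 0 ))               # succeed only when the wrapped command failed
}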
00:27:42.630 [2024-11-05 17:06:31.322241] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134296 ] 00:27:42.630 [2024-11-05 17:06:31.489806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.888 [2024-11-05 17:06:31.651731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.146 [2024-11-05 17:06:31.903833] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:43.146 [2024-11-05 17:06:31.903906] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:43.146 [2024-11-05 17:06:31.903947] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:43.711 [2024-11-05 17:06:32.480897] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:43.969 17:06:32 -- common/autotest_common.sh@653 -- # es=236 00:27:43.969 17:06:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:43.969 17:06:32 -- common/autotest_common.sh@662 -- # es=108 00:27:43.969 17:06:32 -- common/autotest_common.sh@663 -- # case "$es" in 00:27:43.969 17:06:32 -- common/autotest_common.sh@670 -- # es=1 00:27:43.969 17:06:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:43.969 00:27:43.969 real 0m3.128s 00:27:43.969 user 0m2.450s 00:27:43.969 sys 0m0.476s 00:27:43.969 17:06:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:43.969 17:06:32 -- common/autotest_common.sh@10 -- # set +x 00:27:43.969 ************************************ 00:27:43.969 END TEST dd_flag_directory 00:27:43.969 ************************************ 00:27:43.969 17:06:32 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:27:43.969 17:06:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:43.969 17:06:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:43.969 17:06:32 -- common/autotest_common.sh@10 -- # set +x 00:27:43.969 ************************************ 00:27:43.969 START TEST dd_flag_nofollow 00:27:43.969 ************************************ 00:27:43.969 17:06:32 -- common/autotest_common.sh@1114 -- # nofollow 00:27:43.969 17:06:32 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:43.969 17:06:32 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:43.969 17:06:32 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:43.969 17:06:32 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:44.227 17:06:32 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:44.227 17:06:32 -- common/autotest_common.sh@650 -- # local es=0 00:27:44.227 17:06:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:44.227 17:06:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:44.227 17:06:32 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:44.227 17:06:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:44.227 17:06:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:44.227 17:06:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:44.227 17:06:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:44.227 17:06:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:44.227 17:06:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:44.228 17:06:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:44.228 [2024-11-05 17:06:32.939517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:44.228 [2024-11-05 17:06:32.939724] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134336 ] 00:27:44.228 [2024-11-05 17:06:33.109496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.486 [2024-11-05 17:06:33.268900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.744 [2024-11-05 17:06:33.515964] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:27:44.744 [2024-11-05 17:06:33.516053] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:27:44.744 [2024-11-05 17:06:33.516096] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:45.309 [2024-11-05 17:06:34.092941] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:45.567 17:06:34 -- common/autotest_common.sh@653 -- # es=216 00:27:45.567 17:06:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:45.567 17:06:34 -- common/autotest_common.sh@662 -- # es=88 00:27:45.567 17:06:34 -- common/autotest_common.sh@663 -- # case "$es" in 00:27:45.567 17:06:34 -- common/autotest_common.sh@670 -- # es=1 00:27:45.567 17:06:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:45.567 17:06:34 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:45.567 17:06:34 -- common/autotest_common.sh@650 -- # local es=0 00:27:45.567 17:06:34 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:45.567 17:06:34 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:45.567 17:06:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:45.567 17:06:34 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:45.567 17:06:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:45.567 17:06:34 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:45.567 17:06:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:45.567 17:06:34 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:45.567 17:06:34 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:45.567 17:06:34 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:45.825 [2024-11-05 17:06:34.483321] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:45.825 [2024-11-05 17:06:34.483537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134369 ] 00:27:45.825 [2024-11-05 17:06:34.650457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.083 [2024-11-05 17:06:34.813367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.341 [2024-11-05 17:06:35.064169] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:27:46.341 [2024-11-05 17:06:35.064236] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:27:46.341 [2024-11-05 17:06:35.064283] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:46.907 [2024-11-05 17:06:35.643160] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:47.164 17:06:35 -- common/autotest_common.sh@653 -- # es=216 00:27:47.164 17:06:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:47.164 17:06:35 -- common/autotest_common.sh@662 -- # es=88 00:27:47.164 17:06:35 -- common/autotest_common.sh@663 -- # case "$es" in 00:27:47.164 17:06:35 -- common/autotest_common.sh@670 -- # es=1 00:27:47.164 17:06:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:47.164 17:06:35 -- dd/posix.sh@46 -- # gen_bytes 512 00:27:47.164 17:06:35 -- dd/common.sh@98 -- # xtrace_disable 00:27:47.165 17:06:35 -- common/autotest_common.sh@10 -- # set +x 00:27:47.165 17:06:35 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:47.165 [2024-11-05 17:06:36.022315] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:47.165 [2024-11-05 17:06:36.022479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134384 ] 00:27:47.423 [2024-11-05 17:06:36.174360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.680 [2024-11-05 17:06:36.330410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.680  [2024-11-05T17:06:37.492Z] Copying: 512/512 [B] (average 500 kBps) 00:27:48.615 00:27:48.873 17:06:37 -- dd/posix.sh@49 -- # [[ c92ec0n5ipcyspfo27fcxkg2g3ek1cn6ack08kp5p5t6tjh6t9jc3icrwm7efxk68ibzbq4gwjsx7z38o6fg3i77f4e79s0om6xc9vy4z68oyol3mix7a52dilnmhl7jbuxgrnk94stezqzkxqlraga1x1rcwm7ka5b70on1mu2i9zxq4sem9oj00nc4mev2yeci1thsgb14wj6s0vq87fz0sjeq6v664420sfvocft75fwdzzkzcp7ekauf2mew79ig2hsqpb66m47u6qv5a296r4oi82vh9ml8s7tpcwlhaisc56mo9gz4ld9vm7a66nw0ofc9l73utxhjxkp3ezj33jgt4o45cl740szbubc7f3r5qy2816t97wvvdge1txoiknjvg6ja7cb5p84y0w04vhf6o81l9ifos90vn2dud9xo0d7ykwv92w19wqqckbzwwa7the69h3bv1fh53i6z9pemjkof4u8o0hkl8j4e2nyioq93fkv2hb7r4twu == \c\9\2\e\c\0\n\5\i\p\c\y\s\p\f\o\2\7\f\c\x\k\g\2\g\3\e\k\1\c\n\6\a\c\k\0\8\k\p\5\p\5\t\6\t\j\h\6\t\9\j\c\3\i\c\r\w\m\7\e\f\x\k\6\8\i\b\z\b\q\4\g\w\j\s\x\7\z\3\8\o\6\f\g\3\i\7\7\f\4\e\7\9\s\0\o\m\6\x\c\9\v\y\4\z\6\8\o\y\o\l\3\m\i\x\7\a\5\2\d\i\l\n\m\h\l\7\j\b\u\x\g\r\n\k\9\4\s\t\e\z\q\z\k\x\q\l\r\a\g\a\1\x\1\r\c\w\m\7\k\a\5\b\7\0\o\n\1\m\u\2\i\9\z\x\q\4\s\e\m\9\o\j\0\0\n\c\4\m\e\v\2\y\e\c\i\1\t\h\s\g\b\1\4\w\j\6\s\0\v\q\8\7\f\z\0\s\j\e\q\6\v\6\6\4\4\2\0\s\f\v\o\c\f\t\7\5\f\w\d\z\z\k\z\c\p\7\e\k\a\u\f\2\m\e\w\7\9\i\g\2\h\s\q\p\b\6\6\m\4\7\u\6\q\v\5\a\2\9\6\r\4\o\i\8\2\v\h\9\m\l\8\s\7\t\p\c\w\l\h\a\i\s\c\5\6\m\o\9\g\z\4\l\d\9\v\m\7\a\6\6\n\w\0\o\f\c\9\l\7\3\u\t\x\h\j\x\k\p\3\e\z\j\3\3\j\g\t\4\o\4\5\c\l\7\4\0\s\z\b\u\b\c\7\f\3\r\5\q\y\2\8\1\6\t\9\7\w\v\v\d\g\e\1\t\x\o\i\k\n\j\v\g\6\j\a\7\c\b\5\p\8\4\y\0\w\0\4\v\h\f\6\o\8\1\l\9\i\f\o\s\9\0\v\n\2\d\u\d\9\x\o\0\d\7\y\k\w\v\9\2\w\1\9\w\q\q\c\k\b\z\w\w\a\7\t\h\e\6\9\h\3\b\v\1\f\h\5\3\i\6\z\9\p\e\m\j\k\o\f\4\u\8\o\0\h\k\l\8\j\4\e\2\n\y\i\o\q\9\3\f\k\v\2\h\b\7\r\4\t\w\u ]] 00:27:48.873 00:27:48.873 real 0m4.663s 00:27:48.873 user 0m3.656s 00:27:48.873 sys 0m0.674s 00:27:48.873 17:06:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:48.873 17:06:37 -- common/autotest_common.sh@10 -- # set +x 00:27:48.873 ************************************ 00:27:48.873 END TEST dd_flag_nofollow 00:27:48.873 ************************************ 00:27:48.873 17:06:37 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:27:48.873 17:06:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:48.874 17:06:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:48.874 17:06:37 -- common/autotest_common.sh@10 -- # set +x 00:27:48.874 ************************************ 00:27:48.874 START TEST dd_flag_noatime 00:27:48.874 ************************************ 00:27:48.874 17:06:37 -- common/autotest_common.sh@1114 -- # noatime 00:27:48.874 17:06:37 -- dd/posix.sh@53 -- # local atime_if 00:27:48.874 17:06:37 -- dd/posix.sh@54 -- # local atime_of 00:27:48.874 17:06:37 -- dd/posix.sh@58 -- # gen_bytes 512 00:27:48.874 17:06:37 -- dd/common.sh@98 -- # xtrace_disable 00:27:48.874 17:06:37 -- common/autotest_common.sh@10 -- # set +x 00:27:48.874 17:06:37 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:48.874 17:06:37 -- dd/posix.sh@60 -- # atime_if=1730826396 
00:27:48.874 17:06:37 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:48.874 17:06:37 -- dd/posix.sh@61 -- # atime_of=1730826397 00:27:48.874 17:06:37 -- dd/posix.sh@66 -- # sleep 1 00:27:49.807 17:06:38 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:49.807 [2024-11-05 17:06:38.656916] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:49.807 [2024-11-05 17:06:38.657094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134449 ] 00:27:50.065 [2024-11-05 17:06:38.810355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.323 [2024-11-05 17:06:38.974015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.581  [2024-11-05T17:06:40.392Z] Copying: 512/512 [B] (average 500 kBps) 00:27:51.515 00:27:51.515 17:06:40 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:51.515 17:06:40 -- dd/posix.sh@69 -- # (( atime_if == 1730826396 )) 00:27:51.515 17:06:40 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:51.515 17:06:40 -- dd/posix.sh@70 -- # (( atime_of == 1730826397 )) 00:27:51.515 17:06:40 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:51.515 [2024-11-05 17:06:40.231666] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:51.515 [2024-11-05 17:06:40.231817] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134475 ] 00:27:51.515 [2024-11-05 17:06:40.383019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.790 [2024-11-05 17:06:40.540416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.062  [2024-11-05T17:06:41.873Z] Copying: 512/512 [B] (average 500 kBps) 00:27:52.996 00:27:52.996 17:06:41 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:52.996 17:06:41 -- dd/posix.sh@73 -- # (( atime_if < 1730826400 )) 00:27:52.996 00:27:52.996 real 0m4.157s 00:27:52.996 user 0m2.475s 00:27:52.996 sys 0m0.424s 00:27:52.996 17:06:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:52.996 17:06:41 -- common/autotest_common.sh@10 -- # set +x 00:27:52.996 ************************************ 00:27:52.996 END TEST dd_flag_noatime 00:27:52.996 ************************************ 00:27:52.996 17:06:41 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:27:52.996 17:06:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:52.996 17:06:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:52.996 17:06:41 -- common/autotest_common.sh@10 -- # set +x 00:27:52.996 ************************************ 00:27:52.996 START TEST dd_flags_misc 00:27:52.996 ************************************ 00:27:52.996 17:06:41 -- common/autotest_common.sh@1114 -- # io 00:27:52.996 17:06:41 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:27:52.996 17:06:41 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:27:52.996 17:06:41 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:27:52.996 17:06:41 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:27:52.996 17:06:41 -- dd/posix.sh@86 -- # gen_bytes 512 00:27:52.996 17:06:41 -- dd/common.sh@98 -- # xtrace_disable 00:27:52.996 17:06:41 -- common/autotest_common.sh@10 -- # set +x 00:27:52.996 17:06:41 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:52.996 17:06:41 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:27:52.996 [2024-11-05 17:06:41.864442] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:52.996 [2024-11-05 17:06:41.865164] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134516 ] 00:27:53.255 [2024-11-05 17:06:42.033342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.513 [2024-11-05 17:06:42.199393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.771  [2024-11-05T17:06:43.583Z] Copying: 512/512 [B] (average 500 kBps) 00:27:54.706 00:27:54.706 17:06:43 -- dd/posix.sh@93 -- # [[ 8y5kvkg822om5ulkjk7hyr232p1fr7anats1yn671lmi8dnc4eivpimtrin2j60ampjxgv2apcwvalkl2zazoa9gunrklz69p0tzo76yv3l1x50fq2qhpzh03z8rilqzba4r8tlp187gp82b25vbaku04kfa6op9u5rkle0m70z7ciep93j0abw7gjm6pzhuaga9nlb2n1iz5ugu6c6dopaq9qga5ocwxtzam29auzlfpg2f3k04zb0p10o7oqre45ota60p2eldmxh928vux6el67jtuh8c6te0kq33hjdkhcsczkhm92e521immsnrmq10cklonttonq7gywzixh8r724c2ub7dhvingliyqnnfjjkvznvew0k9gxs43z5ju3vti2vmg4r0bpz82wb0ds439ybmfw4skq4afhy332m7038oy99e3q578io8tv9urvm49vjydhqsjof7uphnrgfslhnrz2t48ld7h2umy9ma394s7sn632v4e03srev == \8\y\5\k\v\k\g\8\2\2\o\m\5\u\l\k\j\k\7\h\y\r\2\3\2\p\1\f\r\7\a\n\a\t\s\1\y\n\6\7\1\l\m\i\8\d\n\c\4\e\i\v\p\i\m\t\r\i\n\2\j\6\0\a\m\p\j\x\g\v\2\a\p\c\w\v\a\l\k\l\2\z\a\z\o\a\9\g\u\n\r\k\l\z\6\9\p\0\t\z\o\7\6\y\v\3\l\1\x\5\0\f\q\2\q\h\p\z\h\0\3\z\8\r\i\l\q\z\b\a\4\r\8\t\l\p\1\8\7\g\p\8\2\b\2\5\v\b\a\k\u\0\4\k\f\a\6\o\p\9\u\5\r\k\l\e\0\m\7\0\z\7\c\i\e\p\9\3\j\0\a\b\w\7\g\j\m\6\p\z\h\u\a\g\a\9\n\l\b\2\n\1\i\z\5\u\g\u\6\c\6\d\o\p\a\q\9\q\g\a\5\o\c\w\x\t\z\a\m\2\9\a\u\z\l\f\p\g\2\f\3\k\0\4\z\b\0\p\1\0\o\7\o\q\r\e\4\5\o\t\a\6\0\p\2\e\l\d\m\x\h\9\2\8\v\u\x\6\e\l\6\7\j\t\u\h\8\c\6\t\e\0\k\q\3\3\h\j\d\k\h\c\s\c\z\k\h\m\9\2\e\5\2\1\i\m\m\s\n\r\m\q\1\0\c\k\l\o\n\t\t\o\n\q\7\g\y\w\z\i\x\h\8\r\7\2\4\c\2\u\b\7\d\h\v\i\n\g\l\i\y\q\n\n\f\j\j\k\v\z\n\v\e\w\0\k\9\g\x\s\4\3\z\5\j\u\3\v\t\i\2\v\m\g\4\r\0\b\p\z\8\2\w\b\0\d\s\4\3\9\y\b\m\f\w\4\s\k\q\4\a\f\h\y\3\3\2\m\7\0\3\8\o\y\9\9\e\3\q\5\7\8\i\o\8\t\v\9\u\r\v\m\4\9\v\j\y\d\h\q\s\j\o\f\7\u\p\h\n\r\g\f\s\l\h\n\r\z\2\t\4\8\l\d\7\h\2\u\m\y\9\m\a\3\9\4\s\7\s\n\6\3\2\v\4\e\0\3\s\r\e\v ]] 00:27:54.706 17:06:43 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:54.706 17:06:43 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:27:54.706 [2024-11-05 17:06:43.459699] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:54.706 [2024-11-05 17:06:43.459931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134546 ] 00:27:54.965 [2024-11-05 17:06:43.630303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.965 [2024-11-05 17:06:43.790475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.223  [2024-11-05T17:06:45.034Z] Copying: 512/512 [B] (average 500 kBps) 00:27:56.157 00:27:56.157 17:06:44 -- dd/posix.sh@93 -- # [[ 8y5kvkg822om5ulkjk7hyr232p1fr7anats1yn671lmi8dnc4eivpimtrin2j60ampjxgv2apcwvalkl2zazoa9gunrklz69p0tzo76yv3l1x50fq2qhpzh03z8rilqzba4r8tlp187gp82b25vbaku04kfa6op9u5rkle0m70z7ciep93j0abw7gjm6pzhuaga9nlb2n1iz5ugu6c6dopaq9qga5ocwxtzam29auzlfpg2f3k04zb0p10o7oqre45ota60p2eldmxh928vux6el67jtuh8c6te0kq33hjdkhcsczkhm92e521immsnrmq10cklonttonq7gywzixh8r724c2ub7dhvingliyqnnfjjkvznvew0k9gxs43z5ju3vti2vmg4r0bpz82wb0ds439ybmfw4skq4afhy332m7038oy99e3q578io8tv9urvm49vjydhqsjof7uphnrgfslhnrz2t48ld7h2umy9ma394s7sn632v4e03srev == \8\y\5\k\v\k\g\8\2\2\o\m\5\u\l\k\j\k\7\h\y\r\2\3\2\p\1\f\r\7\a\n\a\t\s\1\y\n\6\7\1\l\m\i\8\d\n\c\4\e\i\v\p\i\m\t\r\i\n\2\j\6\0\a\m\p\j\x\g\v\2\a\p\c\w\v\a\l\k\l\2\z\a\z\o\a\9\g\u\n\r\k\l\z\6\9\p\0\t\z\o\7\6\y\v\3\l\1\x\5\0\f\q\2\q\h\p\z\h\0\3\z\8\r\i\l\q\z\b\a\4\r\8\t\l\p\1\8\7\g\p\8\2\b\2\5\v\b\a\k\u\0\4\k\f\a\6\o\p\9\u\5\r\k\l\e\0\m\7\0\z\7\c\i\e\p\9\3\j\0\a\b\w\7\g\j\m\6\p\z\h\u\a\g\a\9\n\l\b\2\n\1\i\z\5\u\g\u\6\c\6\d\o\p\a\q\9\q\g\a\5\o\c\w\x\t\z\a\m\2\9\a\u\z\l\f\p\g\2\f\3\k\0\4\z\b\0\p\1\0\o\7\o\q\r\e\4\5\o\t\a\6\0\p\2\e\l\d\m\x\h\9\2\8\v\u\x\6\e\l\6\7\j\t\u\h\8\c\6\t\e\0\k\q\3\3\h\j\d\k\h\c\s\c\z\k\h\m\9\2\e\5\2\1\i\m\m\s\n\r\m\q\1\0\c\k\l\o\n\t\t\o\n\q\7\g\y\w\z\i\x\h\8\r\7\2\4\c\2\u\b\7\d\h\v\i\n\g\l\i\y\q\n\n\f\j\j\k\v\z\n\v\e\w\0\k\9\g\x\s\4\3\z\5\j\u\3\v\t\i\2\v\m\g\4\r\0\b\p\z\8\2\w\b\0\d\s\4\3\9\y\b\m\f\w\4\s\k\q\4\a\f\h\y\3\3\2\m\7\0\3\8\o\y\9\9\e\3\q\5\7\8\i\o\8\t\v\9\u\r\v\m\4\9\v\j\y\d\h\q\s\j\o\f\7\u\p\h\n\r\g\f\s\l\h\n\r\z\2\t\4\8\l\d\7\h\2\u\m\y\9\m\a\3\9\4\s\7\s\n\6\3\2\v\4\e\0\3\s\r\e\v ]] 00:27:56.157 17:06:44 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:56.157 17:06:44 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:27:56.157 [2024-11-05 17:06:45.045298] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:56.157 [2024-11-05 17:06:45.046340] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134563 ] 00:27:56.415 [2024-11-05 17:06:45.214038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.674 [2024-11-05 17:06:45.372758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.931  [2024-11-05T17:06:46.742Z] Copying: 512/512 [B] (average 166 kBps) 00:27:57.865 00:27:57.865 17:06:46 -- dd/posix.sh@93 -- # [[ 8y5kvkg822om5ulkjk7hyr232p1fr7anats1yn671lmi8dnc4eivpimtrin2j60ampjxgv2apcwvalkl2zazoa9gunrklz69p0tzo76yv3l1x50fq2qhpzh03z8rilqzba4r8tlp187gp82b25vbaku04kfa6op9u5rkle0m70z7ciep93j0abw7gjm6pzhuaga9nlb2n1iz5ugu6c6dopaq9qga5ocwxtzam29auzlfpg2f3k04zb0p10o7oqre45ota60p2eldmxh928vux6el67jtuh8c6te0kq33hjdkhcsczkhm92e521immsnrmq10cklonttonq7gywzixh8r724c2ub7dhvingliyqnnfjjkvznvew0k9gxs43z5ju3vti2vmg4r0bpz82wb0ds439ybmfw4skq4afhy332m7038oy99e3q578io8tv9urvm49vjydhqsjof7uphnrgfslhnrz2t48ld7h2umy9ma394s7sn632v4e03srev == \8\y\5\k\v\k\g\8\2\2\o\m\5\u\l\k\j\k\7\h\y\r\2\3\2\p\1\f\r\7\a\n\a\t\s\1\y\n\6\7\1\l\m\i\8\d\n\c\4\e\i\v\p\i\m\t\r\i\n\2\j\6\0\a\m\p\j\x\g\v\2\a\p\c\w\v\a\l\k\l\2\z\a\z\o\a\9\g\u\n\r\k\l\z\6\9\p\0\t\z\o\7\6\y\v\3\l\1\x\5\0\f\q\2\q\h\p\z\h\0\3\z\8\r\i\l\q\z\b\a\4\r\8\t\l\p\1\8\7\g\p\8\2\b\2\5\v\b\a\k\u\0\4\k\f\a\6\o\p\9\u\5\r\k\l\e\0\m\7\0\z\7\c\i\e\p\9\3\j\0\a\b\w\7\g\j\m\6\p\z\h\u\a\g\a\9\n\l\b\2\n\1\i\z\5\u\g\u\6\c\6\d\o\p\a\q\9\q\g\a\5\o\c\w\x\t\z\a\m\2\9\a\u\z\l\f\p\g\2\f\3\k\0\4\z\b\0\p\1\0\o\7\o\q\r\e\4\5\o\t\a\6\0\p\2\e\l\d\m\x\h\9\2\8\v\u\x\6\e\l\6\7\j\t\u\h\8\c\6\t\e\0\k\q\3\3\h\j\d\k\h\c\s\c\z\k\h\m\9\2\e\5\2\1\i\m\m\s\n\r\m\q\1\0\c\k\l\o\n\t\t\o\n\q\7\g\y\w\z\i\x\h\8\r\7\2\4\c\2\u\b\7\d\h\v\i\n\g\l\i\y\q\n\n\f\j\j\k\v\z\n\v\e\w\0\k\9\g\x\s\4\3\z\5\j\u\3\v\t\i\2\v\m\g\4\r\0\b\p\z\8\2\w\b\0\d\s\4\3\9\y\b\m\f\w\4\s\k\q\4\a\f\h\y\3\3\2\m\7\0\3\8\o\y\9\9\e\3\q\5\7\8\i\o\8\t\v\9\u\r\v\m\4\9\v\j\y\d\h\q\s\j\o\f\7\u\p\h\n\r\g\f\s\l\h\n\r\z\2\t\4\8\l\d\7\h\2\u\m\y\9\m\a\3\9\4\s\7\s\n\6\3\2\v\4\e\0\3\s\r\e\v ]] 00:27:57.865 17:06:46 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:57.865 17:06:46 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:27:57.865 [2024-11-05 17:06:46.627582] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:57.865 [2024-11-05 17:06:46.628573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134592 ] 00:27:58.123 [2024-11-05 17:06:46.797492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.123 [2024-11-05 17:06:46.959432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.381  [2024-11-05T17:06:48.193Z] Copying: 512/512 [B] (average 166 kBps) 00:27:59.316 00:27:59.316 17:06:48 -- dd/posix.sh@93 -- # [[ 8y5kvkg822om5ulkjk7hyr232p1fr7anats1yn671lmi8dnc4eivpimtrin2j60ampjxgv2apcwvalkl2zazoa9gunrklz69p0tzo76yv3l1x50fq2qhpzh03z8rilqzba4r8tlp187gp82b25vbaku04kfa6op9u5rkle0m70z7ciep93j0abw7gjm6pzhuaga9nlb2n1iz5ugu6c6dopaq9qga5ocwxtzam29auzlfpg2f3k04zb0p10o7oqre45ota60p2eldmxh928vux6el67jtuh8c6te0kq33hjdkhcsczkhm92e521immsnrmq10cklonttonq7gywzixh8r724c2ub7dhvingliyqnnfjjkvznvew0k9gxs43z5ju3vti2vmg4r0bpz82wb0ds439ybmfw4skq4afhy332m7038oy99e3q578io8tv9urvm49vjydhqsjof7uphnrgfslhnrz2t48ld7h2umy9ma394s7sn632v4e03srev == \8\y\5\k\v\k\g\8\2\2\o\m\5\u\l\k\j\k\7\h\y\r\2\3\2\p\1\f\r\7\a\n\a\t\s\1\y\n\6\7\1\l\m\i\8\d\n\c\4\e\i\v\p\i\m\t\r\i\n\2\j\6\0\a\m\p\j\x\g\v\2\a\p\c\w\v\a\l\k\l\2\z\a\z\o\a\9\g\u\n\r\k\l\z\6\9\p\0\t\z\o\7\6\y\v\3\l\1\x\5\0\f\q\2\q\h\p\z\h\0\3\z\8\r\i\l\q\z\b\a\4\r\8\t\l\p\1\8\7\g\p\8\2\b\2\5\v\b\a\k\u\0\4\k\f\a\6\o\p\9\u\5\r\k\l\e\0\m\7\0\z\7\c\i\e\p\9\3\j\0\a\b\w\7\g\j\m\6\p\z\h\u\a\g\a\9\n\l\b\2\n\1\i\z\5\u\g\u\6\c\6\d\o\p\a\q\9\q\g\a\5\o\c\w\x\t\z\a\m\2\9\a\u\z\l\f\p\g\2\f\3\k\0\4\z\b\0\p\1\0\o\7\o\q\r\e\4\5\o\t\a\6\0\p\2\e\l\d\m\x\h\9\2\8\v\u\x\6\e\l\6\7\j\t\u\h\8\c\6\t\e\0\k\q\3\3\h\j\d\k\h\c\s\c\z\k\h\m\9\2\e\5\2\1\i\m\m\s\n\r\m\q\1\0\c\k\l\o\n\t\t\o\n\q\7\g\y\w\z\i\x\h\8\r\7\2\4\c\2\u\b\7\d\h\v\i\n\g\l\i\y\q\n\n\f\j\j\k\v\z\n\v\e\w\0\k\9\g\x\s\4\3\z\5\j\u\3\v\t\i\2\v\m\g\4\r\0\b\p\z\8\2\w\b\0\d\s\4\3\9\y\b\m\f\w\4\s\k\q\4\a\f\h\y\3\3\2\m\7\0\3\8\o\y\9\9\e\3\q\5\7\8\i\o\8\t\v\9\u\r\v\m\4\9\v\j\y\d\h\q\s\j\o\f\7\u\p\h\n\r\g\f\s\l\h\n\r\z\2\t\4\8\l\d\7\h\2\u\m\y\9\m\a\3\9\4\s\7\s\n\6\3\2\v\4\e\0\3\s\r\e\v ]] 00:27:59.316 17:06:48 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:27:59.316 17:06:48 -- dd/posix.sh@86 -- # gen_bytes 512 00:27:59.316 17:06:48 -- dd/common.sh@98 -- # xtrace_disable 00:27:59.316 17:06:48 -- common/autotest_common.sh@10 -- # set +x 00:27:59.316 17:06:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:59.316 17:06:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:27:59.574 [2024-11-05 17:06:48.226485] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:59.574 [2024-11-05 17:06:48.226951] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134609 ] 00:27:59.574 [2024-11-05 17:06:48.394240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.832 [2024-11-05 17:06:48.557511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.090  [2024-11-05T17:06:49.901Z] Copying: 512/512 [B] (average 500 kBps) 00:28:01.024 00:28:01.024 17:06:49 -- dd/posix.sh@93 -- # [[ d4kso0mik8qdclzjjjap6s2znti34xtm06bruemqqxahsfx8vga4gc0fcdh0ru0nvvv6uz2pq0bifyky9oq9shel2m1rqcjvmvqmtfciq5927p6z8s1sqehao7vtommp2zyu951oam0nqhgue4q5d2barsg60cf9nf355on2on20t9wcja12sru16fbbdojo5cypmt4k5qluadn3p4tznfo0qi3obbf4k4ls3w31kiu5wy5lqqpzd1f1wxi9t2nx9decbdqhqqkyzr25xdyfu72vgs3swmp3jyv1rzvotf9des5puewrzdg9dym2b96tlppsei2jzaydbgbnfvgmx6am683y6egq8hmfka4s803ryykigci3zhym0sr6hps36oukkw6k6fjs1s6yznr5t7gyphhiyr8nrmz12q555q0lpkjlw82soah8rzjb5py4w65mbjulokamvbqpwx3hnamgdui997hlqnegbyy5ytn7xrm72gv977vx9iondvus == \d\4\k\s\o\0\m\i\k\8\q\d\c\l\z\j\j\j\a\p\6\s\2\z\n\t\i\3\4\x\t\m\0\6\b\r\u\e\m\q\q\x\a\h\s\f\x\8\v\g\a\4\g\c\0\f\c\d\h\0\r\u\0\n\v\v\v\6\u\z\2\p\q\0\b\i\f\y\k\y\9\o\q\9\s\h\e\l\2\m\1\r\q\c\j\v\m\v\q\m\t\f\c\i\q\5\9\2\7\p\6\z\8\s\1\s\q\e\h\a\o\7\v\t\o\m\m\p\2\z\y\u\9\5\1\o\a\m\0\n\q\h\g\u\e\4\q\5\d\2\b\a\r\s\g\6\0\c\f\9\n\f\3\5\5\o\n\2\o\n\2\0\t\9\w\c\j\a\1\2\s\r\u\1\6\f\b\b\d\o\j\o\5\c\y\p\m\t\4\k\5\q\l\u\a\d\n\3\p\4\t\z\n\f\o\0\q\i\3\o\b\b\f\4\k\4\l\s\3\w\3\1\k\i\u\5\w\y\5\l\q\q\p\z\d\1\f\1\w\x\i\9\t\2\n\x\9\d\e\c\b\d\q\h\q\q\k\y\z\r\2\5\x\d\y\f\u\7\2\v\g\s\3\s\w\m\p\3\j\y\v\1\r\z\v\o\t\f\9\d\e\s\5\p\u\e\w\r\z\d\g\9\d\y\m\2\b\9\6\t\l\p\p\s\e\i\2\j\z\a\y\d\b\g\b\n\f\v\g\m\x\6\a\m\6\8\3\y\6\e\g\q\8\h\m\f\k\a\4\s\8\0\3\r\y\y\k\i\g\c\i\3\z\h\y\m\0\s\r\6\h\p\s\3\6\o\u\k\k\w\6\k\6\f\j\s\1\s\6\y\z\n\r\5\t\7\g\y\p\h\h\i\y\r\8\n\r\m\z\1\2\q\5\5\5\q\0\l\p\k\j\l\w\8\2\s\o\a\h\8\r\z\j\b\5\p\y\4\w\6\5\m\b\j\u\l\o\k\a\m\v\b\q\p\w\x\3\h\n\a\m\g\d\u\i\9\9\7\h\l\q\n\e\g\b\y\y\5\y\t\n\7\x\r\m\7\2\g\v\9\7\7\v\x\9\i\o\n\d\v\u\s ]] 00:28:01.024 17:06:49 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:01.024 17:06:49 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:01.024 [2024-11-05 17:06:49.817395] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:01.024 [2024-11-05 17:06:49.817822] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134633 ] 00:28:01.282 [2024-11-05 17:06:49.985492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.282 [2024-11-05 17:06:50.144506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.540  [2024-11-05T17:06:51.352Z] Copying: 512/512 [B] (average 500 kBps) 00:28:02.475 00:28:02.475 17:06:51 -- dd/posix.sh@93 -- # [[ d4kso0mik8qdclzjjjap6s2znti34xtm06bruemqqxahsfx8vga4gc0fcdh0ru0nvvv6uz2pq0bifyky9oq9shel2m1rqcjvmvqmtfciq5927p6z8s1sqehao7vtommp2zyu951oam0nqhgue4q5d2barsg60cf9nf355on2on20t9wcja12sru16fbbdojo5cypmt4k5qluadn3p4tznfo0qi3obbf4k4ls3w31kiu5wy5lqqpzd1f1wxi9t2nx9decbdqhqqkyzr25xdyfu72vgs3swmp3jyv1rzvotf9des5puewrzdg9dym2b96tlppsei2jzaydbgbnfvgmx6am683y6egq8hmfka4s803ryykigci3zhym0sr6hps36oukkw6k6fjs1s6yznr5t7gyphhiyr8nrmz12q555q0lpkjlw82soah8rzjb5py4w65mbjulokamvbqpwx3hnamgdui997hlqnegbyy5ytn7xrm72gv977vx9iondvus == \d\4\k\s\o\0\m\i\k\8\q\d\c\l\z\j\j\j\a\p\6\s\2\z\n\t\i\3\4\x\t\m\0\6\b\r\u\e\m\q\q\x\a\h\s\f\x\8\v\g\a\4\g\c\0\f\c\d\h\0\r\u\0\n\v\v\v\6\u\z\2\p\q\0\b\i\f\y\k\y\9\o\q\9\s\h\e\l\2\m\1\r\q\c\j\v\m\v\q\m\t\f\c\i\q\5\9\2\7\p\6\z\8\s\1\s\q\e\h\a\o\7\v\t\o\m\m\p\2\z\y\u\9\5\1\o\a\m\0\n\q\h\g\u\e\4\q\5\d\2\b\a\r\s\g\6\0\c\f\9\n\f\3\5\5\o\n\2\o\n\2\0\t\9\w\c\j\a\1\2\s\r\u\1\6\f\b\b\d\o\j\o\5\c\y\p\m\t\4\k\5\q\l\u\a\d\n\3\p\4\t\z\n\f\o\0\q\i\3\o\b\b\f\4\k\4\l\s\3\w\3\1\k\i\u\5\w\y\5\l\q\q\p\z\d\1\f\1\w\x\i\9\t\2\n\x\9\d\e\c\b\d\q\h\q\q\k\y\z\r\2\5\x\d\y\f\u\7\2\v\g\s\3\s\w\m\p\3\j\y\v\1\r\z\v\o\t\f\9\d\e\s\5\p\u\e\w\r\z\d\g\9\d\y\m\2\b\9\6\t\l\p\p\s\e\i\2\j\z\a\y\d\b\g\b\n\f\v\g\m\x\6\a\m\6\8\3\y\6\e\g\q\8\h\m\f\k\a\4\s\8\0\3\r\y\y\k\i\g\c\i\3\z\h\y\m\0\s\r\6\h\p\s\3\6\o\u\k\k\w\6\k\6\f\j\s\1\s\6\y\z\n\r\5\t\7\g\y\p\h\h\i\y\r\8\n\r\m\z\1\2\q\5\5\5\q\0\l\p\k\j\l\w\8\2\s\o\a\h\8\r\z\j\b\5\p\y\4\w\6\5\m\b\j\u\l\o\k\a\m\v\b\q\p\w\x\3\h\n\a\m\g\d\u\i\9\9\7\h\l\q\n\e\g\b\y\y\5\y\t\n\7\x\r\m\7\2\g\v\9\7\7\v\x\9\i\o\n\d\v\u\s ]] 00:28:02.475 17:06:51 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:02.475 17:06:51 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:02.733 [2024-11-05 17:06:51.402184] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:02.733 [2024-11-05 17:06:51.402667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134658 ] 00:28:02.733 [2024-11-05 17:06:51.573436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.991 [2024-11-05 17:06:51.734678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.250  [2024-11-05T17:06:53.061Z] Copying: 512/512 [B] (average 166 kBps) 00:28:04.184 00:28:04.184 17:06:52 -- dd/posix.sh@93 -- # [[ d4kso0mik8qdclzjjjap6s2znti34xtm06bruemqqxahsfx8vga4gc0fcdh0ru0nvvv6uz2pq0bifyky9oq9shel2m1rqcjvmvqmtfciq5927p6z8s1sqehao7vtommp2zyu951oam0nqhgue4q5d2barsg60cf9nf355on2on20t9wcja12sru16fbbdojo5cypmt4k5qluadn3p4tznfo0qi3obbf4k4ls3w31kiu5wy5lqqpzd1f1wxi9t2nx9decbdqhqqkyzr25xdyfu72vgs3swmp3jyv1rzvotf9des5puewrzdg9dym2b96tlppsei2jzaydbgbnfvgmx6am683y6egq8hmfka4s803ryykigci3zhym0sr6hps36oukkw6k6fjs1s6yznr5t7gyphhiyr8nrmz12q555q0lpkjlw82soah8rzjb5py4w65mbjulokamvbqpwx3hnamgdui997hlqnegbyy5ytn7xrm72gv977vx9iondvus == \d\4\k\s\o\0\m\i\k\8\q\d\c\l\z\j\j\j\a\p\6\s\2\z\n\t\i\3\4\x\t\m\0\6\b\r\u\e\m\q\q\x\a\h\s\f\x\8\v\g\a\4\g\c\0\f\c\d\h\0\r\u\0\n\v\v\v\6\u\z\2\p\q\0\b\i\f\y\k\y\9\o\q\9\s\h\e\l\2\m\1\r\q\c\j\v\m\v\q\m\t\f\c\i\q\5\9\2\7\p\6\z\8\s\1\s\q\e\h\a\o\7\v\t\o\m\m\p\2\z\y\u\9\5\1\o\a\m\0\n\q\h\g\u\e\4\q\5\d\2\b\a\r\s\g\6\0\c\f\9\n\f\3\5\5\o\n\2\o\n\2\0\t\9\w\c\j\a\1\2\s\r\u\1\6\f\b\b\d\o\j\o\5\c\y\p\m\t\4\k\5\q\l\u\a\d\n\3\p\4\t\z\n\f\o\0\q\i\3\o\b\b\f\4\k\4\l\s\3\w\3\1\k\i\u\5\w\y\5\l\q\q\p\z\d\1\f\1\w\x\i\9\t\2\n\x\9\d\e\c\b\d\q\h\q\q\k\y\z\r\2\5\x\d\y\f\u\7\2\v\g\s\3\s\w\m\p\3\j\y\v\1\r\z\v\o\t\f\9\d\e\s\5\p\u\e\w\r\z\d\g\9\d\y\m\2\b\9\6\t\l\p\p\s\e\i\2\j\z\a\y\d\b\g\b\n\f\v\g\m\x\6\a\m\6\8\3\y\6\e\g\q\8\h\m\f\k\a\4\s\8\0\3\r\y\y\k\i\g\c\i\3\z\h\y\m\0\s\r\6\h\p\s\3\6\o\u\k\k\w\6\k\6\f\j\s\1\s\6\y\z\n\r\5\t\7\g\y\p\h\h\i\y\r\8\n\r\m\z\1\2\q\5\5\5\q\0\l\p\k\j\l\w\8\2\s\o\a\h\8\r\z\j\b\5\p\y\4\w\6\5\m\b\j\u\l\o\k\a\m\v\b\q\p\w\x\3\h\n\a\m\g\d\u\i\9\9\7\h\l\q\n\e\g\b\y\y\5\y\t\n\7\x\r\m\7\2\g\v\9\7\7\v\x\9\i\o\n\d\v\u\s ]] 00:28:04.184 17:06:52 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:04.184 17:06:52 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:04.184 [2024-11-05 17:06:53.002491] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:04.184 [2024-11-05 17:06:53.002933] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134679 ] 00:28:04.441 [2024-11-05 17:06:53.170799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.441 [2024-11-05 17:06:53.339429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.007  [2024-11-05T17:06:54.818Z] Copying: 512/512 [B] (average 250 kBps) 00:28:05.941 00:28:05.941 ************************************ 00:28:05.941 END TEST dd_flags_misc 00:28:05.941 ************************************ 00:28:05.941 17:06:54 -- dd/posix.sh@93 -- # [[ d4kso0mik8qdclzjjjap6s2znti34xtm06bruemqqxahsfx8vga4gc0fcdh0ru0nvvv6uz2pq0bifyky9oq9shel2m1rqcjvmvqmtfciq5927p6z8s1sqehao7vtommp2zyu951oam0nqhgue4q5d2barsg60cf9nf355on2on20t9wcja12sru16fbbdojo5cypmt4k5qluadn3p4tznfo0qi3obbf4k4ls3w31kiu5wy5lqqpzd1f1wxi9t2nx9decbdqhqqkyzr25xdyfu72vgs3swmp3jyv1rzvotf9des5puewrzdg9dym2b96tlppsei2jzaydbgbnfvgmx6am683y6egq8hmfka4s803ryykigci3zhym0sr6hps36oukkw6k6fjs1s6yznr5t7gyphhiyr8nrmz12q555q0lpkjlw82soah8rzjb5py4w65mbjulokamvbqpwx3hnamgdui997hlqnegbyy5ytn7xrm72gv977vx9iondvus == \d\4\k\s\o\0\m\i\k\8\q\d\c\l\z\j\j\j\a\p\6\s\2\z\n\t\i\3\4\x\t\m\0\6\b\r\u\e\m\q\q\x\a\h\s\f\x\8\v\g\a\4\g\c\0\f\c\d\h\0\r\u\0\n\v\v\v\6\u\z\2\p\q\0\b\i\f\y\k\y\9\o\q\9\s\h\e\l\2\m\1\r\q\c\j\v\m\v\q\m\t\f\c\i\q\5\9\2\7\p\6\z\8\s\1\s\q\e\h\a\o\7\v\t\o\m\m\p\2\z\y\u\9\5\1\o\a\m\0\n\q\h\g\u\e\4\q\5\d\2\b\a\r\s\g\6\0\c\f\9\n\f\3\5\5\o\n\2\o\n\2\0\t\9\w\c\j\a\1\2\s\r\u\1\6\f\b\b\d\o\j\o\5\c\y\p\m\t\4\k\5\q\l\u\a\d\n\3\p\4\t\z\n\f\o\0\q\i\3\o\b\b\f\4\k\4\l\s\3\w\3\1\k\i\u\5\w\y\5\l\q\q\p\z\d\1\f\1\w\x\i\9\t\2\n\x\9\d\e\c\b\d\q\h\q\q\k\y\z\r\2\5\x\d\y\f\u\7\2\v\g\s\3\s\w\m\p\3\j\y\v\1\r\z\v\o\t\f\9\d\e\s\5\p\u\e\w\r\z\d\g\9\d\y\m\2\b\9\6\t\l\p\p\s\e\i\2\j\z\a\y\d\b\g\b\n\f\v\g\m\x\6\a\m\6\8\3\y\6\e\g\q\8\h\m\f\k\a\4\s\8\0\3\r\y\y\k\i\g\c\i\3\z\h\y\m\0\s\r\6\h\p\s\3\6\o\u\k\k\w\6\k\6\f\j\s\1\s\6\y\z\n\r\5\t\7\g\y\p\h\h\i\y\r\8\n\r\m\z\1\2\q\5\5\5\q\0\l\p\k\j\l\w\8\2\s\o\a\h\8\r\z\j\b\5\p\y\4\w\6\5\m\b\j\u\l\o\k\a\m\v\b\q\p\w\x\3\h\n\a\m\g\d\u\i\9\9\7\h\l\q\n\e\g\b\y\y\5\y\t\n\7\x\r\m\7\2\g\v\9\7\7\v\x\9\i\o\n\d\v\u\s ]] 00:28:05.941 00:28:05.941 real 0m12.757s 00:28:05.941 user 0m9.857s 00:28:05.941 sys 0m1.833s 00:28:05.941 17:06:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:05.941 17:06:54 -- common/autotest_common.sh@10 -- # set +x 00:28:05.941 17:06:54 -- dd/posix.sh@131 -- # tests_forced_aio 00:28:05.941 17:06:54 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:28:05.941 * Second test run, using AIO 00:28:05.941 17:06:54 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:28:05.941 17:06:54 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:28:05.941 17:06:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:05.941 17:06:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:05.941 17:06:54 -- common/autotest_common.sh@10 -- # set +x 00:28:05.941 ************************************ 00:28:05.941 START TEST dd_flag_append_forced_aio 00:28:05.941 ************************************ 00:28:05.941 17:06:54 -- common/autotest_common.sh@1114 -- # append 00:28:05.941 17:06:54 -- dd/posix.sh@16 -- # local dump0 00:28:05.941 17:06:54 -- dd/posix.sh@17 -- # local dump1 00:28:05.941 17:06:54 -- dd/posix.sh@19 -- # gen_bytes 32 00:28:05.941 17:06:54 -- dd/common.sh@98 
-- # xtrace_disable 00:28:05.941 17:06:54 -- common/autotest_common.sh@10 -- # set +x 00:28:05.941 17:06:54 -- dd/posix.sh@19 -- # dump0=3mvmnzxo7ulnwmlyfrxubhbv6wzetnt0 00:28:05.941 17:06:54 -- dd/posix.sh@20 -- # gen_bytes 32 00:28:05.941 17:06:54 -- dd/common.sh@98 -- # xtrace_disable 00:28:05.941 17:06:54 -- common/autotest_common.sh@10 -- # set +x 00:28:05.941 17:06:54 -- dd/posix.sh@20 -- # dump1=jhr6wlxdfvhe318cf4tvi5qwyheo0dk1 00:28:05.941 17:06:54 -- dd/posix.sh@22 -- # printf %s 3mvmnzxo7ulnwmlyfrxubhbv6wzetnt0 00:28:05.941 17:06:54 -- dd/posix.sh@23 -- # printf %s jhr6wlxdfvhe318cf4tvi5qwyheo0dk1 00:28:05.941 17:06:54 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:28:05.941 [2024-11-05 17:06:54.674045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:05.941 [2024-11-05 17:06:54.674208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134726 ] 00:28:05.941 [2024-11-05 17:06:54.826365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.199 [2024-11-05 17:06:54.998097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.458  [2024-11-05T17:06:56.287Z] Copying: 32/32 [B] (average 31 kBps) 00:28:07.410 00:28:07.410 17:06:56 -- dd/posix.sh@27 -- # [[ jhr6wlxdfvhe318cf4tvi5qwyheo0dk13mvmnzxo7ulnwmlyfrxubhbv6wzetnt0 == \j\h\r\6\w\l\x\d\f\v\h\e\3\1\8\c\f\4\t\v\i\5\q\w\y\h\e\o\0\d\k\1\3\m\v\m\n\z\x\o\7\u\l\n\w\m\l\y\f\r\x\u\b\h\b\v\6\w\z\e\t\n\t\0 ]] 00:28:07.410 00:28:07.410 real 0m1.583s 00:28:07.410 user 0m1.247s 00:28:07.410 sys 0m0.203s 00:28:07.410 17:06:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:07.410 ************************************ 00:28:07.410 END TEST dd_flag_append_forced_aio 00:28:07.410 ************************************ 00:28:07.410 17:06:56 -- common/autotest_common.sh@10 -- # set +x 00:28:07.410 17:06:56 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:28:07.410 17:06:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:07.410 17:06:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:07.410 17:06:56 -- common/autotest_common.sh@10 -- # set +x 00:28:07.410 ************************************ 00:28:07.410 START TEST dd_flag_directory_forced_aio 00:28:07.410 ************************************ 00:28:07.410 17:06:56 -- common/autotest_common.sh@1114 -- # directory 00:28:07.410 17:06:56 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:07.410 17:06:56 -- common/autotest_common.sh@650 -- # local es=0 00:28:07.410 17:06:56 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:07.410 17:06:56 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:07.410 17:06:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:07.410 17:06:56 -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:07.410 17:06:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:07.410 17:06:56 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:07.410 17:06:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:07.410 17:06:56 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:07.410 17:06:56 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:07.410 17:06:56 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:07.682 [2024-11-05 17:06:56.313922] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:07.682 [2024-11-05 17:06:56.314118] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134773 ] 00:28:07.682 [2024-11-05 17:06:56.482602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.940 [2024-11-05 17:06:56.647534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.199 [2024-11-05 17:06:56.895331] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:08.199 [2024-11-05 17:06:56.895422] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:08.199 [2024-11-05 17:06:56.895467] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:08.765 [2024-11-05 17:06:57.479457] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:09.023 17:06:57 -- common/autotest_common.sh@653 -- # es=236 00:28:09.023 17:06:57 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:09.023 17:06:57 -- common/autotest_common.sh@662 -- # es=108 00:28:09.023 17:06:57 -- common/autotest_common.sh@663 -- # case "$es" in 00:28:09.023 17:06:57 -- common/autotest_common.sh@670 -- # es=1 00:28:09.023 17:06:57 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:09.023 17:06:57 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:09.023 17:06:57 -- common/autotest_common.sh@650 -- # local es=0 00:28:09.024 17:06:57 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:09.024 17:06:57 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:09.024 17:06:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:09.024 17:06:57 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:09.024 17:06:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:09.024 17:06:57 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:09.024 17:06:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:09.024 17:06:57 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:09.024 17:06:57 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:09.024 17:06:57 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:09.024 [2024-11-05 17:06:57.877006] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:09.024 [2024-11-05 17:06:57.877219] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134800 ] 00:28:09.282 [2024-11-05 17:06:58.046340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.540 [2024-11-05 17:06:58.205033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.798 [2024-11-05 17:06:58.452597] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:09.798 [2024-11-05 17:06:58.452679] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:09.798 [2024-11-05 17:06:58.452723] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:10.365 [2024-11-05 17:06:59.037428] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:10.623 17:06:59 -- common/autotest_common.sh@653 -- # es=236 00:28:10.623 17:06:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:10.623 17:06:59 -- common/autotest_common.sh@662 -- # es=108 00:28:10.623 17:06:59 -- common/autotest_common.sh@663 -- # case "$es" in 00:28:10.623 17:06:59 -- common/autotest_common.sh@670 -- # es=1 00:28:10.623 17:06:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:10.623 00:28:10.623 real 0m3.116s 00:28:10.623 user 0m2.420s 00:28:10.623 sys 0m0.495s 00:28:10.623 17:06:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:10.623 17:06:59 -- common/autotest_common.sh@10 -- # set +x 00:28:10.623 ************************************ 00:28:10.623 END TEST dd_flag_directory_forced_aio 00:28:10.623 ************************************ 00:28:10.623 17:06:59 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:28:10.623 17:06:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:10.623 17:06:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:10.623 17:06:59 -- common/autotest_common.sh@10 -- # set +x 00:28:10.623 ************************************ 00:28:10.623 START TEST dd_flag_nofollow_forced_aio 00:28:10.623 ************************************ 00:28:10.623 17:06:59 -- common/autotest_common.sh@1114 -- # nofollow 00:28:10.623 17:06:59 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:10.623 17:06:59 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:10.623 17:06:59 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:10.623 17:06:59 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:10.623 17:06:59 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:10.623 17:06:59 -- common/autotest_common.sh@650 -- # local es=0 00:28:10.623 17:06:59 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:10.623 17:06:59 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:10.623 17:06:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:10.623 17:06:59 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:10.623 17:06:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:10.623 17:06:59 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:10.624 17:06:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:10.624 17:06:59 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:10.624 17:06:59 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:10.624 17:06:59 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:10.624 [2024-11-05 17:06:59.493786] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:10.624 [2024-11-05 17:06:59.494563] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134838 ] 00:28:10.895 [2024-11-05 17:06:59.662985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.155 [2024-11-05 17:06:59.841024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.412 [2024-11-05 17:07:00.094366] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:28:11.412 [2024-11-05 17:07:00.094454] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:28:11.412 [2024-11-05 17:07:00.094507] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:11.978 [2024-11-05 17:07:00.671763] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:12.236 17:07:00 -- common/autotest_common.sh@653 -- # es=216 00:28:12.236 17:07:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:12.236 17:07:00 -- common/autotest_common.sh@662 -- # es=88 00:28:12.236 17:07:00 -- common/autotest_common.sh@663 -- # case "$es" in 00:28:12.236 17:07:00 -- common/autotest_common.sh@670 -- # es=1 00:28:12.236 17:07:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:12.236 17:07:00 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:12.236 17:07:00 -- common/autotest_common.sh@650 -- # local es=0 00:28:12.236 17:07:00 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:12.236 17:07:00 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:12.236 17:07:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:12.236 17:07:00 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:12.236 17:07:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:12.236 17:07:00 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:12.236 17:07:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:12.236 17:07:00 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:12.236 17:07:00 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:12.236 17:07:00 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:12.236 [2024-11-05 17:07:01.058421] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:12.236 [2024-11-05 17:07:01.058608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134870 ] 00:28:12.494 [2024-11-05 17:07:01.219484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.494 [2024-11-05 17:07:01.377496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.752 [2024-11-05 17:07:01.627548] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:28:12.752 [2024-11-05 17:07:01.627626] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:28:12.752 [2024-11-05 17:07:01.627668] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:13.318 [2024-11-05 17:07:02.199473] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:13.885 17:07:02 -- common/autotest_common.sh@653 -- # es=216 00:28:13.885 17:07:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:13.885 17:07:02 -- common/autotest_common.sh@662 -- # es=88 00:28:13.885 17:07:02 -- common/autotest_common.sh@663 -- # case "$es" in 00:28:13.885 17:07:02 -- common/autotest_common.sh@670 -- # es=1 00:28:13.885 17:07:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:13.885 17:07:02 -- dd/posix.sh@46 -- # gen_bytes 512 00:28:13.885 17:07:02 -- dd/common.sh@98 -- # xtrace_disable 00:28:13.885 17:07:02 -- common/autotest_common.sh@10 -- # set +x 00:28:13.885 17:07:02 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:13.885 [2024-11-05 17:07:02.597583] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:13.885 [2024-11-05 17:07:02.597785] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134886 ] 00:28:13.885 [2024-11-05 17:07:02.767030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.143 [2024-11-05 17:07:02.927603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.400  [2024-11-05T17:07:04.211Z] Copying: 512/512 [B] (average 500 kBps) 00:28:15.334 00:28:15.334 17:07:04 -- dd/posix.sh@49 -- # [[ t8voe34ais4quk6vq60aqb277wv85r9vqbhbxeucg7i57hhrzlrushne5j5cbs2h544fqjvosh5bdz5wmc15n8q1o40d5fas7a7f0qmdwgucmavikdqnxynfzblcjjb0fhdonpkyg5ete3ktzfqqtrmlvcyqiy7zypzgd96etu1yfk5zfe7ho2xh5flz4xdguul8oqd6ipn57p98ajlgzh7iwsblclsy6ooskpg5kmsdnfc5iwkuwfwvo90wip06us6rsmru8zlwy5qej3ncy3om8zd0b10dynz9oly8o3vj85y0ag23eu1213fle7q0xv85ah1fm0e35c316xzxesnxdcff1ibb8vrqfcadeidbvq0i044mwraofx9m4hrqexvddak8wh864hk9fyf6en0znt7n5rbab8tpgt02skldl28oskvjnq7ptcm8r4iw1k6qvpsecofrxu56q22iyj6520sji9ixnl8der1vaxe8ujztnejc934dwiy99jn0 == \t\8\v\o\e\3\4\a\i\s\4\q\u\k\6\v\q\6\0\a\q\b\2\7\7\w\v\8\5\r\9\v\q\b\h\b\x\e\u\c\g\7\i\5\7\h\h\r\z\l\r\u\s\h\n\e\5\j\5\c\b\s\2\h\5\4\4\f\q\j\v\o\s\h\5\b\d\z\5\w\m\c\1\5\n\8\q\1\o\4\0\d\5\f\a\s\7\a\7\f\0\q\m\d\w\g\u\c\m\a\v\i\k\d\q\n\x\y\n\f\z\b\l\c\j\j\b\0\f\h\d\o\n\p\k\y\g\5\e\t\e\3\k\t\z\f\q\q\t\r\m\l\v\c\y\q\i\y\7\z\y\p\z\g\d\9\6\e\t\u\1\y\f\k\5\z\f\e\7\h\o\2\x\h\5\f\l\z\4\x\d\g\u\u\l\8\o\q\d\6\i\p\n\5\7\p\9\8\a\j\l\g\z\h\7\i\w\s\b\l\c\l\s\y\6\o\o\s\k\p\g\5\k\m\s\d\n\f\c\5\i\w\k\u\w\f\w\v\o\9\0\w\i\p\0\6\u\s\6\r\s\m\r\u\8\z\l\w\y\5\q\e\j\3\n\c\y\3\o\m\8\z\d\0\b\1\0\d\y\n\z\9\o\l\y\8\o\3\v\j\8\5\y\0\a\g\2\3\e\u\1\2\1\3\f\l\e\7\q\0\x\v\8\5\a\h\1\f\m\0\e\3\5\c\3\1\6\x\z\x\e\s\n\x\d\c\f\f\1\i\b\b\8\v\r\q\f\c\a\d\e\i\d\b\v\q\0\i\0\4\4\m\w\r\a\o\f\x\9\m\4\h\r\q\e\x\v\d\d\a\k\8\w\h\8\6\4\h\k\9\f\y\f\6\e\n\0\z\n\t\7\n\5\r\b\a\b\8\t\p\g\t\0\2\s\k\l\d\l\2\8\o\s\k\v\j\n\q\7\p\t\c\m\8\r\4\i\w\1\k\6\q\v\p\s\e\c\o\f\r\x\u\5\6\q\2\2\i\y\j\6\5\2\0\s\j\i\9\i\x\n\l\8\d\e\r\1\v\a\x\e\8\u\j\z\t\n\e\j\c\9\3\4\d\w\i\y\9\9\j\n\0 ]] 00:28:15.334 00:28:15.334 real 0m4.714s 00:28:15.334 user 0m3.714s 00:28:15.334 sys 0m0.669s 00:28:15.334 17:07:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:15.334 17:07:04 -- common/autotest_common.sh@10 -- # set +x 00:28:15.334 ************************************ 00:28:15.334 END TEST dd_flag_nofollow_forced_aio 00:28:15.334 ************************************ 00:28:15.334 17:07:04 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:28:15.334 17:07:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:15.334 17:07:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:15.334 17:07:04 -- common/autotest_common.sh@10 -- # set +x 00:28:15.334 ************************************ 00:28:15.334 START TEST dd_flag_noatime_forced_aio 00:28:15.334 ************************************ 00:28:15.334 17:07:04 -- common/autotest_common.sh@1114 -- # noatime 00:28:15.334 17:07:04 -- dd/posix.sh@53 -- # local atime_if 00:28:15.334 17:07:04 -- dd/posix.sh@54 -- # local atime_of 00:28:15.334 17:07:04 -- dd/posix.sh@58 -- # gen_bytes 512 00:28:15.334 17:07:04 -- dd/common.sh@98 -- # xtrace_disable 00:28:15.334 17:07:04 -- common/autotest_common.sh@10 -- # set +x 00:28:15.334 17:07:04 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:15.334 17:07:04 -- dd/posix.sh@60 -- 
# atime_if=1730826423 00:28:15.334 17:07:04 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:15.334 17:07:04 -- dd/posix.sh@61 -- # atime_of=1730826424 00:28:15.334 17:07:04 -- dd/posix.sh@66 -- # sleep 1 00:28:16.709 17:07:05 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:16.709 [2024-11-05 17:07:05.284868] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:16.709 [2024-11-05 17:07:05.285079] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134952 ] 00:28:16.709 [2024-11-05 17:07:05.454666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.968 [2024-11-05 17:07:05.616273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.968  [2024-11-05T17:07:06.779Z] Copying: 512/512 [B] (average 500 kBps) 00:28:17.902 00:28:18.161 17:07:06 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:18.161 17:07:06 -- dd/posix.sh@69 -- # (( atime_if == 1730826423 )) 00:28:18.161 17:07:06 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:18.161 17:07:06 -- dd/posix.sh@70 -- # (( atime_of == 1730826424 )) 00:28:18.161 17:07:06 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:18.161 [2024-11-05 17:07:06.882708] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:18.161 [2024-11-05 17:07:06.883483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134978 ] 00:28:18.161 [2024-11-05 17:07:07.051093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.419 [2024-11-05 17:07:07.218521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.677  [2024-11-05T17:07:08.488Z] Copying: 512/512 [B] (average 500 kBps) 00:28:19.611 00:28:19.611 17:07:08 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:19.611 17:07:08 -- dd/posix.sh@73 -- # (( atime_if < 1730826427 )) 00:28:19.611 00:28:19.611 real 0m4.233s 00:28:19.611 user 0m2.451s 00:28:19.611 sys 0m0.519s 00:28:19.611 17:07:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:19.611 ************************************ 00:28:19.611 END TEST dd_flag_noatime_forced_aio 00:28:19.611 17:07:08 -- common/autotest_common.sh@10 -- # set +x 00:28:19.611 ************************************ 00:28:19.611 17:07:08 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:28:19.611 17:07:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:19.611 17:07:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:19.611 17:07:08 -- common/autotest_common.sh@10 -- # set +x 00:28:19.612 ************************************ 00:28:19.612 START TEST dd_flags_misc_forced_aio 00:28:19.612 ************************************ 00:28:19.612 17:07:08 -- common/autotest_common.sh@1114 -- # io 00:28:19.612 17:07:08 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:28:19.612 17:07:08 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:28:19.612 17:07:08 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:28:19.612 17:07:08 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:19.612 17:07:08 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:19.612 17:07:08 -- dd/common.sh@98 -- # xtrace_disable 00:28:19.612 17:07:08 -- common/autotest_common.sh@10 -- # set +x 00:28:19.612 17:07:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:19.612 17:07:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:19.869 [2024-11-05 17:07:08.547980] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:19.869 [2024-11-05 17:07:08.548177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135020 ] 00:28:19.869 [2024-11-05 17:07:08.714628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.127 [2024-11-05 17:07:08.873199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.386  [2024-11-05T17:07:10.197Z] Copying: 512/512 [B] (average 166 kBps) 00:28:21.320 00:28:21.320 17:07:10 -- dd/posix.sh@93 -- # [[ 50aiemsog9xr9trpcz4qp2p6xjfw6q69dqppt6izee9inpkhk6693q7ka3c639soyof8fai7vak58lcbz6hf8pltc2lpv0mkv3dozr9m2vsqnhsgfd0o2v4wyrjfr1wbcheuuhkk3pmn5irckfqk8a8zxs3gbhpatcpa3up4psl6r7tk5bxt0lahb00j6skbqh1xfxweeaiifaieqm4lkwxizegdfzn046isxtgz4pcalq8kdnxcxsw3m0d85uo35g4whasan3mu9d158x9ykfrkplwhgl5y3p7l4uhkn03jhl97je2pv1ko8tz6ij0lxdyqq6pytoygemdblgmzakvrzm5r1taotv3tfmrinyc23t8aspt1ytssmqz83uo2pmnoyg12cwn3ozkry3oscsnk63wwodlj0k08gz3q1zd7vqcxmk2zjj189qvmhy7l2lhxju5yw4ljpgafqa88mpn4zdxg4dlg5svx0rrmo29mhjs2opu20wndzw10gde3 == \5\0\a\i\e\m\s\o\g\9\x\r\9\t\r\p\c\z\4\q\p\2\p\6\x\j\f\w\6\q\6\9\d\q\p\p\t\6\i\z\e\e\9\i\n\p\k\h\k\6\6\9\3\q\7\k\a\3\c\6\3\9\s\o\y\o\f\8\f\a\i\7\v\a\k\5\8\l\c\b\z\6\h\f\8\p\l\t\c\2\l\p\v\0\m\k\v\3\d\o\z\r\9\m\2\v\s\q\n\h\s\g\f\d\0\o\2\v\4\w\y\r\j\f\r\1\w\b\c\h\e\u\u\h\k\k\3\p\m\n\5\i\r\c\k\f\q\k\8\a\8\z\x\s\3\g\b\h\p\a\t\c\p\a\3\u\p\4\p\s\l\6\r\7\t\k\5\b\x\t\0\l\a\h\b\0\0\j\6\s\k\b\q\h\1\x\f\x\w\e\e\a\i\i\f\a\i\e\q\m\4\l\k\w\x\i\z\e\g\d\f\z\n\0\4\6\i\s\x\t\g\z\4\p\c\a\l\q\8\k\d\n\x\c\x\s\w\3\m\0\d\8\5\u\o\3\5\g\4\w\h\a\s\a\n\3\m\u\9\d\1\5\8\x\9\y\k\f\r\k\p\l\w\h\g\l\5\y\3\p\7\l\4\u\h\k\n\0\3\j\h\l\9\7\j\e\2\p\v\1\k\o\8\t\z\6\i\j\0\l\x\d\y\q\q\6\p\y\t\o\y\g\e\m\d\b\l\g\m\z\a\k\v\r\z\m\5\r\1\t\a\o\t\v\3\t\f\m\r\i\n\y\c\2\3\t\8\a\s\p\t\1\y\t\s\s\m\q\z\8\3\u\o\2\p\m\n\o\y\g\1\2\c\w\n\3\o\z\k\r\y\3\o\s\c\s\n\k\6\3\w\w\o\d\l\j\0\k\0\8\g\z\3\q\1\z\d\7\v\q\c\x\m\k\2\z\j\j\1\8\9\q\v\m\h\y\7\l\2\l\h\x\j\u\5\y\w\4\l\j\p\g\a\f\q\a\8\8\m\p\n\4\z\d\x\g\4\d\l\g\5\s\v\x\0\r\r\m\o\2\9\m\h\j\s\2\o\p\u\2\0\w\n\d\z\w\1\0\g\d\e\3 ]] 00:28:21.320 17:07:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:21.320 17:07:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:21.320 [2024-11-05 17:07:10.144944] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:21.320 [2024-11-05 17:07:10.145163] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135047 ] 00:28:21.578 [2024-11-05 17:07:10.314142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.578 [2024-11-05 17:07:10.475021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.841  [2024-11-05T17:07:11.687Z] Copying: 512/512 [B] (average 500 kBps) 00:28:22.810 00:28:22.810 17:07:11 -- dd/posix.sh@93 -- # [[ 50aiemsog9xr9trpcz4qp2p6xjfw6q69dqppt6izee9inpkhk6693q7ka3c639soyof8fai7vak58lcbz6hf8pltc2lpv0mkv3dozr9m2vsqnhsgfd0o2v4wyrjfr1wbcheuuhkk3pmn5irckfqk8a8zxs3gbhpatcpa3up4psl6r7tk5bxt0lahb00j6skbqh1xfxweeaiifaieqm4lkwxizegdfzn046isxtgz4pcalq8kdnxcxsw3m0d85uo35g4whasan3mu9d158x9ykfrkplwhgl5y3p7l4uhkn03jhl97je2pv1ko8tz6ij0lxdyqq6pytoygemdblgmzakvrzm5r1taotv3tfmrinyc23t8aspt1ytssmqz83uo2pmnoyg12cwn3ozkry3oscsnk63wwodlj0k08gz3q1zd7vqcxmk2zjj189qvmhy7l2lhxju5yw4ljpgafqa88mpn4zdxg4dlg5svx0rrmo29mhjs2opu20wndzw10gde3 == \5\0\a\i\e\m\s\o\g\9\x\r\9\t\r\p\c\z\4\q\p\2\p\6\x\j\f\w\6\q\6\9\d\q\p\p\t\6\i\z\e\e\9\i\n\p\k\h\k\6\6\9\3\q\7\k\a\3\c\6\3\9\s\o\y\o\f\8\f\a\i\7\v\a\k\5\8\l\c\b\z\6\h\f\8\p\l\t\c\2\l\p\v\0\m\k\v\3\d\o\z\r\9\m\2\v\s\q\n\h\s\g\f\d\0\o\2\v\4\w\y\r\j\f\r\1\w\b\c\h\e\u\u\h\k\k\3\p\m\n\5\i\r\c\k\f\q\k\8\a\8\z\x\s\3\g\b\h\p\a\t\c\p\a\3\u\p\4\p\s\l\6\r\7\t\k\5\b\x\t\0\l\a\h\b\0\0\j\6\s\k\b\q\h\1\x\f\x\w\e\e\a\i\i\f\a\i\e\q\m\4\l\k\w\x\i\z\e\g\d\f\z\n\0\4\6\i\s\x\t\g\z\4\p\c\a\l\q\8\k\d\n\x\c\x\s\w\3\m\0\d\8\5\u\o\3\5\g\4\w\h\a\s\a\n\3\m\u\9\d\1\5\8\x\9\y\k\f\r\k\p\l\w\h\g\l\5\y\3\p\7\l\4\u\h\k\n\0\3\j\h\l\9\7\j\e\2\p\v\1\k\o\8\t\z\6\i\j\0\l\x\d\y\q\q\6\p\y\t\o\y\g\e\m\d\b\l\g\m\z\a\k\v\r\z\m\5\r\1\t\a\o\t\v\3\t\f\m\r\i\n\y\c\2\3\t\8\a\s\p\t\1\y\t\s\s\m\q\z\8\3\u\o\2\p\m\n\o\y\g\1\2\c\w\n\3\o\z\k\r\y\3\o\s\c\s\n\k\6\3\w\w\o\d\l\j\0\k\0\8\g\z\3\q\1\z\d\7\v\q\c\x\m\k\2\z\j\j\1\8\9\q\v\m\h\y\7\l\2\l\h\x\j\u\5\y\w\4\l\j\p\g\a\f\q\a\8\8\m\p\n\4\z\d\x\g\4\d\l\g\5\s\v\x\0\r\r\m\o\2\9\m\h\j\s\2\o\p\u\2\0\w\n\d\z\w\1\0\g\d\e\3 ]] 00:28:22.810 17:07:11 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:22.810 17:07:11 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:23.069 [2024-11-05 17:07:11.735356] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:23.069 [2024-11-05 17:07:11.735582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135064 ] 00:28:23.069 [2024-11-05 17:07:11.906218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.327 [2024-11-05 17:07:12.065697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.585  [2024-11-05T17:07:13.396Z] Copying: 512/512 [B] (average 125 kBps) 00:28:24.519 00:28:24.519 17:07:13 -- dd/posix.sh@93 -- # [[ 50aiemsog9xr9trpcz4qp2p6xjfw6q69dqppt6izee9inpkhk6693q7ka3c639soyof8fai7vak58lcbz6hf8pltc2lpv0mkv3dozr9m2vsqnhsgfd0o2v4wyrjfr1wbcheuuhkk3pmn5irckfqk8a8zxs3gbhpatcpa3up4psl6r7tk5bxt0lahb00j6skbqh1xfxweeaiifaieqm4lkwxizegdfzn046isxtgz4pcalq8kdnxcxsw3m0d85uo35g4whasan3mu9d158x9ykfrkplwhgl5y3p7l4uhkn03jhl97je2pv1ko8tz6ij0lxdyqq6pytoygemdblgmzakvrzm5r1taotv3tfmrinyc23t8aspt1ytssmqz83uo2pmnoyg12cwn3ozkry3oscsnk63wwodlj0k08gz3q1zd7vqcxmk2zjj189qvmhy7l2lhxju5yw4ljpgafqa88mpn4zdxg4dlg5svx0rrmo29mhjs2opu20wndzw10gde3 == \5\0\a\i\e\m\s\o\g\9\x\r\9\t\r\p\c\z\4\q\p\2\p\6\x\j\f\w\6\q\6\9\d\q\p\p\t\6\i\z\e\e\9\i\n\p\k\h\k\6\6\9\3\q\7\k\a\3\c\6\3\9\s\o\y\o\f\8\f\a\i\7\v\a\k\5\8\l\c\b\z\6\h\f\8\p\l\t\c\2\l\p\v\0\m\k\v\3\d\o\z\r\9\m\2\v\s\q\n\h\s\g\f\d\0\o\2\v\4\w\y\r\j\f\r\1\w\b\c\h\e\u\u\h\k\k\3\p\m\n\5\i\r\c\k\f\q\k\8\a\8\z\x\s\3\g\b\h\p\a\t\c\p\a\3\u\p\4\p\s\l\6\r\7\t\k\5\b\x\t\0\l\a\h\b\0\0\j\6\s\k\b\q\h\1\x\f\x\w\e\e\a\i\i\f\a\i\e\q\m\4\l\k\w\x\i\z\e\g\d\f\z\n\0\4\6\i\s\x\t\g\z\4\p\c\a\l\q\8\k\d\n\x\c\x\s\w\3\m\0\d\8\5\u\o\3\5\g\4\w\h\a\s\a\n\3\m\u\9\d\1\5\8\x\9\y\k\f\r\k\p\l\w\h\g\l\5\y\3\p\7\l\4\u\h\k\n\0\3\j\h\l\9\7\j\e\2\p\v\1\k\o\8\t\z\6\i\j\0\l\x\d\y\q\q\6\p\y\t\o\y\g\e\m\d\b\l\g\m\z\a\k\v\r\z\m\5\r\1\t\a\o\t\v\3\t\f\m\r\i\n\y\c\2\3\t\8\a\s\p\t\1\y\t\s\s\m\q\z\8\3\u\o\2\p\m\n\o\y\g\1\2\c\w\n\3\o\z\k\r\y\3\o\s\c\s\n\k\6\3\w\w\o\d\l\j\0\k\0\8\g\z\3\q\1\z\d\7\v\q\c\x\m\k\2\z\j\j\1\8\9\q\v\m\h\y\7\l\2\l\h\x\j\u\5\y\w\4\l\j\p\g\a\f\q\a\8\8\m\p\n\4\z\d\x\g\4\d\l\g\5\s\v\x\0\r\r\m\o\2\9\m\h\j\s\2\o\p\u\2\0\w\n\d\z\w\1\0\g\d\e\3 ]] 00:28:24.519 17:07:13 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:24.519 17:07:13 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:24.519 [2024-11-05 17:07:13.316238] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:24.519 [2024-11-05 17:07:13.316413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135088 ] 00:28:24.778 [2024-11-05 17:07:13.467791] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.778 [2024-11-05 17:07:13.631070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.036  [2024-11-05T17:07:14.848Z] Copying: 512/512 [B] (average 166 kBps) 00:28:25.971 00:28:25.971 17:07:14 -- dd/posix.sh@93 -- # [[ 50aiemsog9xr9trpcz4qp2p6xjfw6q69dqppt6izee9inpkhk6693q7ka3c639soyof8fai7vak58lcbz6hf8pltc2lpv0mkv3dozr9m2vsqnhsgfd0o2v4wyrjfr1wbcheuuhkk3pmn5irckfqk8a8zxs3gbhpatcpa3up4psl6r7tk5bxt0lahb00j6skbqh1xfxweeaiifaieqm4lkwxizegdfzn046isxtgz4pcalq8kdnxcxsw3m0d85uo35g4whasan3mu9d158x9ykfrkplwhgl5y3p7l4uhkn03jhl97je2pv1ko8tz6ij0lxdyqq6pytoygemdblgmzakvrzm5r1taotv3tfmrinyc23t8aspt1ytssmqz83uo2pmnoyg12cwn3ozkry3oscsnk63wwodlj0k08gz3q1zd7vqcxmk2zjj189qvmhy7l2lhxju5yw4ljpgafqa88mpn4zdxg4dlg5svx0rrmo29mhjs2opu20wndzw10gde3 == \5\0\a\i\e\m\s\o\g\9\x\r\9\t\r\p\c\z\4\q\p\2\p\6\x\j\f\w\6\q\6\9\d\q\p\p\t\6\i\z\e\e\9\i\n\p\k\h\k\6\6\9\3\q\7\k\a\3\c\6\3\9\s\o\y\o\f\8\f\a\i\7\v\a\k\5\8\l\c\b\z\6\h\f\8\p\l\t\c\2\l\p\v\0\m\k\v\3\d\o\z\r\9\m\2\v\s\q\n\h\s\g\f\d\0\o\2\v\4\w\y\r\j\f\r\1\w\b\c\h\e\u\u\h\k\k\3\p\m\n\5\i\r\c\k\f\q\k\8\a\8\z\x\s\3\g\b\h\p\a\t\c\p\a\3\u\p\4\p\s\l\6\r\7\t\k\5\b\x\t\0\l\a\h\b\0\0\j\6\s\k\b\q\h\1\x\f\x\w\e\e\a\i\i\f\a\i\e\q\m\4\l\k\w\x\i\z\e\g\d\f\z\n\0\4\6\i\s\x\t\g\z\4\p\c\a\l\q\8\k\d\n\x\c\x\s\w\3\m\0\d\8\5\u\o\3\5\g\4\w\h\a\s\a\n\3\m\u\9\d\1\5\8\x\9\y\k\f\r\k\p\l\w\h\g\l\5\y\3\p\7\l\4\u\h\k\n\0\3\j\h\l\9\7\j\e\2\p\v\1\k\o\8\t\z\6\i\j\0\l\x\d\y\q\q\6\p\y\t\o\y\g\e\m\d\b\l\g\m\z\a\k\v\r\z\m\5\r\1\t\a\o\t\v\3\t\f\m\r\i\n\y\c\2\3\t\8\a\s\p\t\1\y\t\s\s\m\q\z\8\3\u\o\2\p\m\n\o\y\g\1\2\c\w\n\3\o\z\k\r\y\3\o\s\c\s\n\k\6\3\w\w\o\d\l\j\0\k\0\8\g\z\3\q\1\z\d\7\v\q\c\x\m\k\2\z\j\j\1\8\9\q\v\m\h\y\7\l\2\l\h\x\j\u\5\y\w\4\l\j\p\g\a\f\q\a\8\8\m\p\n\4\z\d\x\g\4\d\l\g\5\s\v\x\0\r\r\m\o\2\9\m\h\j\s\2\o\p\u\2\0\w\n\d\z\w\1\0\g\d\e\3 ]] 00:28:25.971 17:07:14 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:25.971 17:07:14 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:25.971 17:07:14 -- dd/common.sh@98 -- # xtrace_disable 00:28:25.971 17:07:14 -- common/autotest_common.sh@10 -- # set +x 00:28:25.971 17:07:14 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:25.971 17:07:14 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:26.230 [2024-11-05 17:07:14.900858] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:26.230 [2024-11-05 17:07:14.901072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135119 ] 00:28:26.230 [2024-11-05 17:07:15.070565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.489 [2024-11-05 17:07:15.230707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.747  [2024-11-05T17:07:16.558Z] Copying: 512/512 [B] (average 500 kBps) 00:28:27.681 00:28:27.681 17:07:16 -- dd/posix.sh@93 -- # [[ 9u2mxlhemm8srr42hb0fgodrmo1kp8jfanqjqw35ezj0qv2x5737r14t65axu0wr9z5sprg83fgwx0x6n32ht1lzo6n5ohv9db67q5vlkuqttt5l46ve20ftl2rzcyaxidtehjnuum7jnstbioehhrdx8g7jkxgh2kbcyd3ptl3o5ip4bxjxgpxy3yhk4nmy8mbpapo3o8zlnrqg80j4ymsew07wdeq9unoz3cevvvtz8r3m9i7tblatr4yzgcwhrzsr1lvhw6zxmmytauw4ve2ub36n1u676ch9lpbqvhzhvlns9mddllv9w0v3gpsvdug5aks22jwtjgwos1ehpnz6f9glhqmj5ursktsc98ins2m9y85qklq7u6idpfv6obcbh14ziwnu3fdvb3g5s7mtkhyp1canoz5le1sd1732adyu0n56igp8osi60s8rumw7waazjp29y6uevvgrdsyeky7bhjpwkx8s0sj618zrz6vxudqarz9m1y4wvl8h == \9\u\2\m\x\l\h\e\m\m\8\s\r\r\4\2\h\b\0\f\g\o\d\r\m\o\1\k\p\8\j\f\a\n\q\j\q\w\3\5\e\z\j\0\q\v\2\x\5\7\3\7\r\1\4\t\6\5\a\x\u\0\w\r\9\z\5\s\p\r\g\8\3\f\g\w\x\0\x\6\n\3\2\h\t\1\l\z\o\6\n\5\o\h\v\9\d\b\6\7\q\5\v\l\k\u\q\t\t\t\5\l\4\6\v\e\2\0\f\t\l\2\r\z\c\y\a\x\i\d\t\e\h\j\n\u\u\m\7\j\n\s\t\b\i\o\e\h\h\r\d\x\8\g\7\j\k\x\g\h\2\k\b\c\y\d\3\p\t\l\3\o\5\i\p\4\b\x\j\x\g\p\x\y\3\y\h\k\4\n\m\y\8\m\b\p\a\p\o\3\o\8\z\l\n\r\q\g\8\0\j\4\y\m\s\e\w\0\7\w\d\e\q\9\u\n\o\z\3\c\e\v\v\v\t\z\8\r\3\m\9\i\7\t\b\l\a\t\r\4\y\z\g\c\w\h\r\z\s\r\1\l\v\h\w\6\z\x\m\m\y\t\a\u\w\4\v\e\2\u\b\3\6\n\1\u\6\7\6\c\h\9\l\p\b\q\v\h\z\h\v\l\n\s\9\m\d\d\l\l\v\9\w\0\v\3\g\p\s\v\d\u\g\5\a\k\s\2\2\j\w\t\j\g\w\o\s\1\e\h\p\n\z\6\f\9\g\l\h\q\m\j\5\u\r\s\k\t\s\c\9\8\i\n\s\2\m\9\y\8\5\q\k\l\q\7\u\6\i\d\p\f\v\6\o\b\c\b\h\1\4\z\i\w\n\u\3\f\d\v\b\3\g\5\s\7\m\t\k\h\y\p\1\c\a\n\o\z\5\l\e\1\s\d\1\7\3\2\a\d\y\u\0\n\5\6\i\g\p\8\o\s\i\6\0\s\8\r\u\m\w\7\w\a\a\z\j\p\2\9\y\6\u\e\v\v\g\r\d\s\y\e\k\y\7\b\h\j\p\w\k\x\8\s\0\s\j\6\1\8\z\r\z\6\v\x\u\d\q\a\r\z\9\m\1\y\4\w\v\l\8\h ]] 00:28:27.681 17:07:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:27.681 17:07:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:27.681 [2024-11-05 17:07:16.471725] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:27.681 [2024-11-05 17:07:16.471890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135136 ] 00:28:27.940 [2024-11-05 17:07:16.622987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.940 [2024-11-05 17:07:16.778447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.198  [2024-11-05T17:07:18.009Z] Copying: 512/512 [B] (average 500 kBps) 00:28:29.132 00:28:29.133 17:07:17 -- dd/posix.sh@93 -- # [[ 9u2mxlhemm8srr42hb0fgodrmo1kp8jfanqjqw35ezj0qv2x5737r14t65axu0wr9z5sprg83fgwx0x6n32ht1lzo6n5ohv9db67q5vlkuqttt5l46ve20ftl2rzcyaxidtehjnuum7jnstbioehhrdx8g7jkxgh2kbcyd3ptl3o5ip4bxjxgpxy3yhk4nmy8mbpapo3o8zlnrqg80j4ymsew07wdeq9unoz3cevvvtz8r3m9i7tblatr4yzgcwhrzsr1lvhw6zxmmytauw4ve2ub36n1u676ch9lpbqvhzhvlns9mddllv9w0v3gpsvdug5aks22jwtjgwos1ehpnz6f9glhqmj5ursktsc98ins2m9y85qklq7u6idpfv6obcbh14ziwnu3fdvb3g5s7mtkhyp1canoz5le1sd1732adyu0n56igp8osi60s8rumw7waazjp29y6uevvgrdsyeky7bhjpwkx8s0sj618zrz6vxudqarz9m1y4wvl8h == \9\u\2\m\x\l\h\e\m\m\8\s\r\r\4\2\h\b\0\f\g\o\d\r\m\o\1\k\p\8\j\f\a\n\q\j\q\w\3\5\e\z\j\0\q\v\2\x\5\7\3\7\r\1\4\t\6\5\a\x\u\0\w\r\9\z\5\s\p\r\g\8\3\f\g\w\x\0\x\6\n\3\2\h\t\1\l\z\o\6\n\5\o\h\v\9\d\b\6\7\q\5\v\l\k\u\q\t\t\t\5\l\4\6\v\e\2\0\f\t\l\2\r\z\c\y\a\x\i\d\t\e\h\j\n\u\u\m\7\j\n\s\t\b\i\o\e\h\h\r\d\x\8\g\7\j\k\x\g\h\2\k\b\c\y\d\3\p\t\l\3\o\5\i\p\4\b\x\j\x\g\p\x\y\3\y\h\k\4\n\m\y\8\m\b\p\a\p\o\3\o\8\z\l\n\r\q\g\8\0\j\4\y\m\s\e\w\0\7\w\d\e\q\9\u\n\o\z\3\c\e\v\v\v\t\z\8\r\3\m\9\i\7\t\b\l\a\t\r\4\y\z\g\c\w\h\r\z\s\r\1\l\v\h\w\6\z\x\m\m\y\t\a\u\w\4\v\e\2\u\b\3\6\n\1\u\6\7\6\c\h\9\l\p\b\q\v\h\z\h\v\l\n\s\9\m\d\d\l\l\v\9\w\0\v\3\g\p\s\v\d\u\g\5\a\k\s\2\2\j\w\t\j\g\w\o\s\1\e\h\p\n\z\6\f\9\g\l\h\q\m\j\5\u\r\s\k\t\s\c\9\8\i\n\s\2\m\9\y\8\5\q\k\l\q\7\u\6\i\d\p\f\v\6\o\b\c\b\h\1\4\z\i\w\n\u\3\f\d\v\b\3\g\5\s\7\m\t\k\h\y\p\1\c\a\n\o\z\5\l\e\1\s\d\1\7\3\2\a\d\y\u\0\n\5\6\i\g\p\8\o\s\i\6\0\s\8\r\u\m\w\7\w\a\a\z\j\p\2\9\y\6\u\e\v\v\g\r\d\s\y\e\k\y\7\b\h\j\p\w\k\x\8\s\0\s\j\6\1\8\z\r\z\6\v\x\u\d\q\a\r\z\9\m\1\y\4\w\v\l\8\h ]] 00:28:29.133 17:07:17 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:29.133 17:07:17 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:29.133 [2024-11-05 17:07:18.028247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:29.133 [2024-11-05 17:07:18.028413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135160 ] 00:28:29.391 [2024-11-05 17:07:18.181648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.649 [2024-11-05 17:07:18.345480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.907  [2024-11-05T17:07:19.719Z] Copying: 512/512 [B] (average 100 kBps) 00:28:30.842 00:28:30.842 17:07:19 -- dd/posix.sh@93 -- # [[ 9u2mxlhemm8srr42hb0fgodrmo1kp8jfanqjqw35ezj0qv2x5737r14t65axu0wr9z5sprg83fgwx0x6n32ht1lzo6n5ohv9db67q5vlkuqttt5l46ve20ftl2rzcyaxidtehjnuum7jnstbioehhrdx8g7jkxgh2kbcyd3ptl3o5ip4bxjxgpxy3yhk4nmy8mbpapo3o8zlnrqg80j4ymsew07wdeq9unoz3cevvvtz8r3m9i7tblatr4yzgcwhrzsr1lvhw6zxmmytauw4ve2ub36n1u676ch9lpbqvhzhvlns9mddllv9w0v3gpsvdug5aks22jwtjgwos1ehpnz6f9glhqmj5ursktsc98ins2m9y85qklq7u6idpfv6obcbh14ziwnu3fdvb3g5s7mtkhyp1canoz5le1sd1732adyu0n56igp8osi60s8rumw7waazjp29y6uevvgrdsyeky7bhjpwkx8s0sj618zrz6vxudqarz9m1y4wvl8h == \9\u\2\m\x\l\h\e\m\m\8\s\r\r\4\2\h\b\0\f\g\o\d\r\m\o\1\k\p\8\j\f\a\n\q\j\q\w\3\5\e\z\j\0\q\v\2\x\5\7\3\7\r\1\4\t\6\5\a\x\u\0\w\r\9\z\5\s\p\r\g\8\3\f\g\w\x\0\x\6\n\3\2\h\t\1\l\z\o\6\n\5\o\h\v\9\d\b\6\7\q\5\v\l\k\u\q\t\t\t\5\l\4\6\v\e\2\0\f\t\l\2\r\z\c\y\a\x\i\d\t\e\h\j\n\u\u\m\7\j\n\s\t\b\i\o\e\h\h\r\d\x\8\g\7\j\k\x\g\h\2\k\b\c\y\d\3\p\t\l\3\o\5\i\p\4\b\x\j\x\g\p\x\y\3\y\h\k\4\n\m\y\8\m\b\p\a\p\o\3\o\8\z\l\n\r\q\g\8\0\j\4\y\m\s\e\w\0\7\w\d\e\q\9\u\n\o\z\3\c\e\v\v\v\t\z\8\r\3\m\9\i\7\t\b\l\a\t\r\4\y\z\g\c\w\h\r\z\s\r\1\l\v\h\w\6\z\x\m\m\y\t\a\u\w\4\v\e\2\u\b\3\6\n\1\u\6\7\6\c\h\9\l\p\b\q\v\h\z\h\v\l\n\s\9\m\d\d\l\l\v\9\w\0\v\3\g\p\s\v\d\u\g\5\a\k\s\2\2\j\w\t\j\g\w\o\s\1\e\h\p\n\z\6\f\9\g\l\h\q\m\j\5\u\r\s\k\t\s\c\9\8\i\n\s\2\m\9\y\8\5\q\k\l\q\7\u\6\i\d\p\f\v\6\o\b\c\b\h\1\4\z\i\w\n\u\3\f\d\v\b\3\g\5\s\7\m\t\k\h\y\p\1\c\a\n\o\z\5\l\e\1\s\d\1\7\3\2\a\d\y\u\0\n\5\6\i\g\p\8\o\s\i\6\0\s\8\r\u\m\w\7\w\a\a\z\j\p\2\9\y\6\u\e\v\v\g\r\d\s\y\e\k\y\7\b\h\j\p\w\k\x\8\s\0\s\j\6\1\8\z\r\z\6\v\x\u\d\q\a\r\z\9\m\1\y\4\w\v\l\8\h ]] 00:28:30.842 17:07:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:30.842 17:07:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:30.842 [2024-11-05 17:07:19.598097] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:30.842 [2024-11-05 17:07:19.598247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135181 ] 00:28:31.100 [2024-11-05 17:07:19.750895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.100 [2024-11-05 17:07:19.909594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.358  [2024-11-05T17:07:21.170Z] Copying: 512/512 [B] (average 166 kBps) 00:28:32.293 00:28:32.293 ************************************ 00:28:32.293 END TEST dd_flags_misc_forced_aio 00:28:32.293 ************************************ 00:28:32.294 17:07:21 -- dd/posix.sh@93 -- # [[ 9u2mxlhemm8srr42hb0fgodrmo1kp8jfanqjqw35ezj0qv2x5737r14t65axu0wr9z5sprg83fgwx0x6n32ht1lzo6n5ohv9db67q5vlkuqttt5l46ve20ftl2rzcyaxidtehjnuum7jnstbioehhrdx8g7jkxgh2kbcyd3ptl3o5ip4bxjxgpxy3yhk4nmy8mbpapo3o8zlnrqg80j4ymsew07wdeq9unoz3cevvvtz8r3m9i7tblatr4yzgcwhrzsr1lvhw6zxmmytauw4ve2ub36n1u676ch9lpbqvhzhvlns9mddllv9w0v3gpsvdug5aks22jwtjgwos1ehpnz6f9glhqmj5ursktsc98ins2m9y85qklq7u6idpfv6obcbh14ziwnu3fdvb3g5s7mtkhyp1canoz5le1sd1732adyu0n56igp8osi60s8rumw7waazjp29y6uevvgrdsyeky7bhjpwkx8s0sj618zrz6vxudqarz9m1y4wvl8h == \9\u\2\m\x\l\h\e\m\m\8\s\r\r\4\2\h\b\0\f\g\o\d\r\m\o\1\k\p\8\j\f\a\n\q\j\q\w\3\5\e\z\j\0\q\v\2\x\5\7\3\7\r\1\4\t\6\5\a\x\u\0\w\r\9\z\5\s\p\r\g\8\3\f\g\w\x\0\x\6\n\3\2\h\t\1\l\z\o\6\n\5\o\h\v\9\d\b\6\7\q\5\v\l\k\u\q\t\t\t\5\l\4\6\v\e\2\0\f\t\l\2\r\z\c\y\a\x\i\d\t\e\h\j\n\u\u\m\7\j\n\s\t\b\i\o\e\h\h\r\d\x\8\g\7\j\k\x\g\h\2\k\b\c\y\d\3\p\t\l\3\o\5\i\p\4\b\x\j\x\g\p\x\y\3\y\h\k\4\n\m\y\8\m\b\p\a\p\o\3\o\8\z\l\n\r\q\g\8\0\j\4\y\m\s\e\w\0\7\w\d\e\q\9\u\n\o\z\3\c\e\v\v\v\t\z\8\r\3\m\9\i\7\t\b\l\a\t\r\4\y\z\g\c\w\h\r\z\s\r\1\l\v\h\w\6\z\x\m\m\y\t\a\u\w\4\v\e\2\u\b\3\6\n\1\u\6\7\6\c\h\9\l\p\b\q\v\h\z\h\v\l\n\s\9\m\d\d\l\l\v\9\w\0\v\3\g\p\s\v\d\u\g\5\a\k\s\2\2\j\w\t\j\g\w\o\s\1\e\h\p\n\z\6\f\9\g\l\h\q\m\j\5\u\r\s\k\t\s\c\9\8\i\n\s\2\m\9\y\8\5\q\k\l\q\7\u\6\i\d\p\f\v\6\o\b\c\b\h\1\4\z\i\w\n\u\3\f\d\v\b\3\g\5\s\7\m\t\k\h\y\p\1\c\a\n\o\z\5\l\e\1\s\d\1\7\3\2\a\d\y\u\0\n\5\6\i\g\p\8\o\s\i\6\0\s\8\r\u\m\w\7\w\a\a\z\j\p\2\9\y\6\u\e\v\v\g\r\d\s\y\e\k\y\7\b\h\j\p\w\k\x\8\s\0\s\j\6\1\8\z\r\z\6\v\x\u\d\q\a\r\z\9\m\1\y\4\w\v\l\8\h ]] 00:28:32.294 00:28:32.294 real 0m12.642s 00:28:32.294 user 0m9.797s 00:28:32.294 sys 0m1.775s 00:28:32.294 17:07:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:32.294 17:07:21 -- common/autotest_common.sh@10 -- # set +x 00:28:32.294 17:07:21 -- dd/posix.sh@1 -- # cleanup 00:28:32.294 17:07:21 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:32.294 17:07:21 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:32.294 00:28:32.294 real 0m53.302s 00:28:32.294 user 0m39.725s 00:28:32.294 sys 0m7.563s 00:28:32.294 17:07:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:32.294 17:07:21 -- common/autotest_common.sh@10 -- # set +x 00:28:32.294 ************************************ 00:28:32.294 END TEST spdk_dd_posix 00:28:32.294 ************************************ 00:28:32.553 17:07:21 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:28:32.553 17:07:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:32.553 17:07:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:28:32.553 17:07:21 -- common/autotest_common.sh@10 -- # set +x 00:28:32.553 ************************************ 00:28:32.553 START TEST spdk_dd_malloc 00:28:32.553 ************************************ 00:28:32.553 17:07:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:28:32.553 * Looking for test storage... 00:28:32.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:32.553 17:07:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:32.553 17:07:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:32.553 17:07:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:32.553 17:07:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:32.553 17:07:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:32.553 17:07:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:32.553 17:07:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:32.553 17:07:21 -- scripts/common.sh@335 -- # IFS=.-: 00:28:32.553 17:07:21 -- scripts/common.sh@335 -- # read -ra ver1 00:28:32.553 17:07:21 -- scripts/common.sh@336 -- # IFS=.-: 00:28:32.553 17:07:21 -- scripts/common.sh@336 -- # read -ra ver2 00:28:32.553 17:07:21 -- scripts/common.sh@337 -- # local 'op=<' 00:28:32.553 17:07:21 -- scripts/common.sh@339 -- # ver1_l=2 00:28:32.553 17:07:21 -- scripts/common.sh@340 -- # ver2_l=1 00:28:32.553 17:07:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:32.553 17:07:21 -- scripts/common.sh@343 -- # case "$op" in 00:28:32.553 17:07:21 -- scripts/common.sh@344 -- # : 1 00:28:32.553 17:07:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:32.553 17:07:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:32.553 17:07:21 -- scripts/common.sh@364 -- # decimal 1 00:28:32.553 17:07:21 -- scripts/common.sh@352 -- # local d=1 00:28:32.553 17:07:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:32.553 17:07:21 -- scripts/common.sh@354 -- # echo 1 00:28:32.553 17:07:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:32.553 17:07:21 -- scripts/common.sh@365 -- # decimal 2 00:28:32.553 17:07:21 -- scripts/common.sh@352 -- # local d=2 00:28:32.553 17:07:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:32.553 17:07:21 -- scripts/common.sh@354 -- # echo 2 00:28:32.553 17:07:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:32.553 17:07:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:32.553 17:07:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:32.553 17:07:21 -- scripts/common.sh@367 -- # return 0 00:28:32.553 17:07:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:32.553 17:07:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:32.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.553 --rc genhtml_branch_coverage=1 00:28:32.553 --rc genhtml_function_coverage=1 00:28:32.553 --rc genhtml_legend=1 00:28:32.553 --rc geninfo_all_blocks=1 00:28:32.553 --rc geninfo_unexecuted_blocks=1 00:28:32.553 00:28:32.553 ' 00:28:32.553 17:07:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:32.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.553 --rc genhtml_branch_coverage=1 00:28:32.553 --rc genhtml_function_coverage=1 00:28:32.553 --rc genhtml_legend=1 00:28:32.553 --rc geninfo_all_blocks=1 00:28:32.553 --rc geninfo_unexecuted_blocks=1 00:28:32.553 00:28:32.553 ' 00:28:32.553 17:07:21 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:28:32.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.553 --rc genhtml_branch_coverage=1 00:28:32.553 --rc genhtml_function_coverage=1 00:28:32.553 --rc genhtml_legend=1 00:28:32.553 --rc geninfo_all_blocks=1 00:28:32.553 --rc geninfo_unexecuted_blocks=1 00:28:32.553 00:28:32.553 ' 00:28:32.553 17:07:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:32.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.553 --rc genhtml_branch_coverage=1 00:28:32.553 --rc genhtml_function_coverage=1 00:28:32.553 --rc genhtml_legend=1 00:28:32.553 --rc geninfo_all_blocks=1 00:28:32.553 --rc geninfo_unexecuted_blocks=1 00:28:32.553 00:28:32.553 ' 00:28:32.553 17:07:21 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:32.553 17:07:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:32.553 17:07:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:32.553 17:07:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:32.553 17:07:21 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:32.553 17:07:21 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:32.553 17:07:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:32.553 17:07:21 -- paths/export.sh@5 -- # export PATH 00:28:32.553 17:07:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:32.553 17:07:21 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:28:32.553 17:07:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:32.553 
17:07:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:32.553 17:07:21 -- common/autotest_common.sh@10 -- # set +x 00:28:32.553 ************************************ 00:28:32.553 START TEST dd_malloc_copy 00:28:32.553 ************************************ 00:28:32.553 17:07:21 -- common/autotest_common.sh@1114 -- # malloc_copy 00:28:32.553 17:07:21 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:28:32.553 17:07:21 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:28:32.553 17:07:21 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:28:32.553 17:07:21 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:28:32.553 17:07:21 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:28:32.553 17:07:21 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:28:32.553 17:07:21 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:28:32.553 17:07:21 -- dd/malloc.sh@28 -- # gen_conf 00:28:32.554 17:07:21 -- dd/common.sh@31 -- # xtrace_disable 00:28:32.554 17:07:21 -- common/autotest_common.sh@10 -- # set +x 00:28:32.812 [2024-11-05 17:07:21.479343] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:32.812 [2024-11-05 17:07:21.479550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135279 ] 00:28:32.812 { 00:28:32.812 "subsystems": [ 00:28:32.812 { 00:28:32.812 "subsystem": "bdev", 00:28:32.812 "config": [ 00:28:32.812 { 00:28:32.812 "params": { 00:28:32.812 "block_size": 512, 00:28:32.812 "num_blocks": 1048576, 00:28:32.812 "name": "malloc0" 00:28:32.812 }, 00:28:32.812 "method": "bdev_malloc_create" 00:28:32.812 }, 00:28:32.812 { 00:28:32.812 "params": { 00:28:32.812 "block_size": 512, 00:28:32.812 "num_blocks": 1048576, 00:28:32.812 "name": "malloc1" 00:28:32.812 }, 00:28:32.812 "method": "bdev_malloc_create" 00:28:32.812 }, 00:28:32.812 { 00:28:32.812 "method": "bdev_wait_for_examine" 00:28:32.812 } 00:28:32.812 ] 00:28:32.812 } 00:28:32.812 ] 00:28:32.812 } 00:28:32.812 [2024-11-05 17:07:21.646992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.070 [2024-11-05 17:07:21.806154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.599  [2024-11-05T17:07:25.043Z] Copying: 224/512 [MB] (224 MBps) [2024-11-05T17:07:25.302Z] Copying: 449/512 [MB] (224 MBps) [2024-11-05T17:07:28.612Z] Copying: 512/512 [MB] (average 224 MBps) 00:28:39.735 00:28:39.735 17:07:28 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:28:39.735 17:07:28 -- dd/malloc.sh@33 -- # gen_conf 00:28:39.735 17:07:28 -- dd/common.sh@31 -- # xtrace_disable 00:28:39.735 17:07:28 -- common/autotest_common.sh@10 -- # set +x 00:28:39.735 [2024-11-05 17:07:28.213621] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:39.735 [2024-11-05 17:07:28.213798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135375 ] 00:28:39.735 { 00:28:39.735 "subsystems": [ 00:28:39.735 { 00:28:39.735 "subsystem": "bdev", 00:28:39.735 "config": [ 00:28:39.735 { 00:28:39.735 "params": { 00:28:39.735 "block_size": 512, 00:28:39.735 "num_blocks": 1048576, 00:28:39.735 "name": "malloc0" 00:28:39.735 }, 00:28:39.735 "method": "bdev_malloc_create" 00:28:39.735 }, 00:28:39.735 { 00:28:39.735 "params": { 00:28:39.735 "block_size": 512, 00:28:39.735 "num_blocks": 1048576, 00:28:39.735 "name": "malloc1" 00:28:39.735 }, 00:28:39.735 "method": "bdev_malloc_create" 00:28:39.735 }, 00:28:39.735 { 00:28:39.735 "method": "bdev_wait_for_examine" 00:28:39.735 } 00:28:39.735 ] 00:28:39.735 } 00:28:39.735 ] 00:28:39.735 } 00:28:39.735 [2024-11-05 17:07:28.365877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.735 [2024-11-05 17:07:28.521438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.265  [2024-11-05T17:07:31.708Z] Copying: 222/512 [MB] (222 MBps) [2024-11-05T17:07:31.967Z] Copying: 447/512 [MB] (224 MBps) [2024-11-05T17:07:35.249Z] Copying: 512/512 [MB] (average 223 MBps) 00:28:46.372 00:28:46.372 00:28:46.372 real 0m13.468s 00:28:46.372 user 0m12.303s 00:28:46.372 sys 0m1.049s 00:28:46.373 17:07:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:46.373 17:07:34 -- common/autotest_common.sh@10 -- # set +x 00:28:46.373 ************************************ 00:28:46.373 END TEST dd_malloc_copy 00:28:46.373 ************************************ 00:28:46.373 00:28:46.373 real 0m13.700s 00:28:46.373 user 0m12.481s 00:28:46.373 sys 0m1.115s 00:28:46.373 17:07:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:46.373 ************************************ 00:28:46.373 END TEST spdk_dd_malloc 00:28:46.373 ************************************ 00:28:46.373 17:07:34 -- common/autotest_common.sh@10 -- # set +x 00:28:46.373 17:07:34 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:28:46.373 17:07:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:46.373 17:07:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:46.373 17:07:34 -- common/autotest_common.sh@10 -- # set +x 00:28:46.373 ************************************ 00:28:46.373 START TEST spdk_dd_bdev_to_bdev 00:28:46.373 ************************************ 00:28:46.373 17:07:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:28:46.373 * Looking for test storage... 
00:28:46.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:46.373 17:07:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:46.373 17:07:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:46.373 17:07:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:46.373 17:07:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:46.373 17:07:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:46.373 17:07:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:46.373 17:07:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:46.373 17:07:35 -- scripts/common.sh@335 -- # IFS=.-: 00:28:46.373 17:07:35 -- scripts/common.sh@335 -- # read -ra ver1 00:28:46.373 17:07:35 -- scripts/common.sh@336 -- # IFS=.-: 00:28:46.373 17:07:35 -- scripts/common.sh@336 -- # read -ra ver2 00:28:46.373 17:07:35 -- scripts/common.sh@337 -- # local 'op=<' 00:28:46.373 17:07:35 -- scripts/common.sh@339 -- # ver1_l=2 00:28:46.373 17:07:35 -- scripts/common.sh@340 -- # ver2_l=1 00:28:46.373 17:07:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:46.373 17:07:35 -- scripts/common.sh@343 -- # case "$op" in 00:28:46.373 17:07:35 -- scripts/common.sh@344 -- # : 1 00:28:46.373 17:07:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:46.373 17:07:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:46.373 17:07:35 -- scripts/common.sh@364 -- # decimal 1 00:28:46.373 17:07:35 -- scripts/common.sh@352 -- # local d=1 00:28:46.373 17:07:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:46.373 17:07:35 -- scripts/common.sh@354 -- # echo 1 00:28:46.373 17:07:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:46.373 17:07:35 -- scripts/common.sh@365 -- # decimal 2 00:28:46.373 17:07:35 -- scripts/common.sh@352 -- # local d=2 00:28:46.373 17:07:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:46.373 17:07:35 -- scripts/common.sh@354 -- # echo 2 00:28:46.373 17:07:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:46.373 17:07:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:46.373 17:07:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:46.373 17:07:35 -- scripts/common.sh@367 -- # return 0 00:28:46.373 17:07:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:46.373 17:07:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:46.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.373 --rc genhtml_branch_coverage=1 00:28:46.373 --rc genhtml_function_coverage=1 00:28:46.373 --rc genhtml_legend=1 00:28:46.373 --rc geninfo_all_blocks=1 00:28:46.373 --rc geninfo_unexecuted_blocks=1 00:28:46.373 00:28:46.373 ' 00:28:46.373 17:07:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:46.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.373 --rc genhtml_branch_coverage=1 00:28:46.373 --rc genhtml_function_coverage=1 00:28:46.373 --rc genhtml_legend=1 00:28:46.373 --rc geninfo_all_blocks=1 00:28:46.373 --rc geninfo_unexecuted_blocks=1 00:28:46.373 00:28:46.373 ' 00:28:46.373 17:07:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:46.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.373 --rc genhtml_branch_coverage=1 00:28:46.373 --rc genhtml_function_coverage=1 00:28:46.373 --rc genhtml_legend=1 00:28:46.373 --rc geninfo_all_blocks=1 00:28:46.373 --rc geninfo_unexecuted_blocks=1 00:28:46.373 00:28:46.373 ' 00:28:46.373 17:07:35 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:46.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.373 --rc genhtml_branch_coverage=1 00:28:46.373 --rc genhtml_function_coverage=1 00:28:46.373 --rc genhtml_legend=1 00:28:46.373 --rc geninfo_all_blocks=1 00:28:46.373 --rc geninfo_unexecuted_blocks=1 00:28:46.373 00:28:46.373 ' 00:28:46.373 17:07:35 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:46.373 17:07:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.373 17:07:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.373 17:07:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.373 17:07:35 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:46.373 17:07:35 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:46.373 17:07:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:46.373 17:07:35 -- paths/export.sh@5 -- # export PATH 00:28:46.373 17:07:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:46.373 17:07:35 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:28:46.373 17:07:35 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:28:46.373 17:07:35 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:28:46.373 17:07:35 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:28:46.373 17:07:35 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:28:46.373 17:07:35 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:28:46.373 17:07:35 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:28:46.373 17:07:35 -- dd/bdev_to_bdev.sh@68 -- # 
aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:28:46.373 17:07:35 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:28:46.373 17:07:35 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:28:46.373 17:07:35 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:28:46.373 17:07:35 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:28:46.373 17:07:35 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:28:46.373 17:07:35 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:28:46.373 [2024-11-05 17:07:35.216477] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:46.373 [2024-11-05 17:07:35.217246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135522 ] 00:28:46.632 [2024-11-05 17:07:35.387358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.890 [2024-11-05 17:07:35.545496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.148  [2024-11-05T17:07:36.960Z] Copying: 256/256 [MB] (average 1414 MBps) 00:28:48.083 00:28:48.083 17:07:36 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:48.083 17:07:36 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:48.083 17:07:36 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:28:48.083 17:07:36 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:28:48.083 17:07:36 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:28:48.083 17:07:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:28:48.083 17:07:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:48.083 17:07:36 -- common/autotest_common.sh@10 -- # set +x 00:28:48.083 ************************************ 00:28:48.083 START TEST dd_inflate_file 00:28:48.083 ************************************ 00:28:48.083 17:07:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:28:48.341 [2024-11-05 17:07:37.002125] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:48.341 [2024-11-05 17:07:37.002843] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135554 ] 00:28:48.341 [2024-11-05 17:07:37.171410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.600 [2024-11-05 17:07:37.341974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.858  [2024-11-05T17:07:38.669Z] Copying: 64/64 [MB] (average 1422 MBps) 00:28:49.792 00:28:49.792 00:28:49.792 real 0m1.648s 00:28:49.792 user 0m1.274s 00:28:49.792 sys 0m0.245s 00:28:49.792 17:07:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:49.792 17:07:38 -- common/autotest_common.sh@10 -- # set +x 00:28:49.792 ************************************ 00:28:49.792 END TEST dd_inflate_file 00:28:49.792 ************************************ 00:28:49.792 17:07:38 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:28:49.792 17:07:38 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:28:49.792 17:07:38 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:28:49.792 17:07:38 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:28:49.792 17:07:38 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:28:49.792 17:07:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:49.792 17:07:38 -- dd/common.sh@31 -- # xtrace_disable 00:28:49.792 17:07:38 -- common/autotest_common.sh@10 -- # set +x 00:28:49.792 17:07:38 -- common/autotest_common.sh@10 -- # set +x 00:28:49.792 ************************************ 00:28:49.792 START TEST dd_copy_to_out_bdev 00:28:49.792 ************************************ 00:28:49.792 17:07:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:28:50.050 { 00:28:50.050 "subsystems": [ 00:28:50.050 { 00:28:50.050 "subsystem": "bdev", 00:28:50.050 "config": [ 00:28:50.050 { 00:28:50.050 "params": { 00:28:50.050 "block_size": 4096, 00:28:50.050 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:50.050 "name": "aio1" 00:28:50.050 }, 00:28:50.050 "method": "bdev_aio_create" 00:28:50.050 }, 00:28:50.050 { 00:28:50.050 "params": { 00:28:50.050 "trtype": "pcie", 00:28:50.050 "traddr": "0000:00:06.0", 00:28:50.050 "name": "Nvme0" 00:28:50.050 }, 00:28:50.050 "method": "bdev_nvme_attach_controller" 00:28:50.050 }, 00:28:50.050 { 00:28:50.050 "method": "bdev_wait_for_examine" 00:28:50.050 } 00:28:50.050 ] 00:28:50.050 } 00:28:50.050 ] 00:28:50.050 } 00:28:50.050 [2024-11-05 17:07:38.700664] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:50.050 [2024-11-05 17:07:38.700833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135608 ] 00:28:50.050 [2024-11-05 17:07:38.852555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.309 [2024-11-05 17:07:39.021958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.683  [2024-11-05T17:07:41.127Z] Copying: 39/64 [MB] (39 MBps) [2024-11-05T17:07:42.061Z] Copying: 64/64 [MB] (average 39 MBps) 00:28:53.184 00:28:53.184 00:28:53.184 real 0m3.332s 00:28:53.184 user 0m2.959s 00:28:53.184 sys 0m0.274s 00:28:53.184 17:07:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:53.184 17:07:41 -- common/autotest_common.sh@10 -- # set +x 00:28:53.184 ************************************ 00:28:53.184 END TEST dd_copy_to_out_bdev 00:28:53.184 ************************************ 00:28:53.184 17:07:42 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:28:53.184 17:07:42 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:28:53.184 17:07:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:53.184 17:07:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:53.184 17:07:42 -- common/autotest_common.sh@10 -- # set +x 00:28:53.184 ************************************ 00:28:53.184 START TEST dd_offset_magic 00:28:53.184 ************************************ 00:28:53.184 17:07:42 -- common/autotest_common.sh@1114 -- # offset_magic 00:28:53.184 17:07:42 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:28:53.184 17:07:42 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:28:53.184 17:07:42 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:28:53.184 17:07:42 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:28:53.184 17:07:42 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:28:53.184 17:07:42 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:28:53.184 17:07:42 -- dd/common.sh@31 -- # xtrace_disable 00:28:53.184 17:07:42 -- common/autotest_common.sh@10 -- # set +x 00:28:53.442 { 00:28:53.442 "subsystems": [ 00:28:53.442 { 00:28:53.442 "subsystem": "bdev", 00:28:53.442 "config": [ 00:28:53.442 { 00:28:53.442 "params": { 00:28:53.442 "block_size": 4096, 00:28:53.442 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:53.442 "name": "aio1" 00:28:53.442 }, 00:28:53.442 "method": "bdev_aio_create" 00:28:53.442 }, 00:28:53.442 { 00:28:53.442 "params": { 00:28:53.442 "trtype": "pcie", 00:28:53.442 "traddr": "0000:00:06.0", 00:28:53.442 "name": "Nvme0" 00:28:53.442 }, 00:28:53.442 "method": "bdev_nvme_attach_controller" 00:28:53.442 }, 00:28:53.442 { 00:28:53.442 "method": "bdev_wait_for_examine" 00:28:53.442 } 00:28:53.442 ] 00:28:53.442 } 00:28:53.442 ] 00:28:53.442 } 00:28:53.442 [2024-11-05 17:07:42.102218] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:53.442 [2024-11-05 17:07:42.102411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135671 ] 00:28:53.442 [2024-11-05 17:07:42.270386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.701 [2024-11-05 17:07:42.437686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.635  [2024-11-05T17:07:44.446Z] Copying: 65/65 [MB] (average 138 MBps) 00:28:55.569 00:28:55.569 17:07:44 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:28:55.569 17:07:44 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:28:55.569 17:07:44 -- dd/common.sh@31 -- # xtrace_disable 00:28:55.569 17:07:44 -- common/autotest_common.sh@10 -- # set +x 00:28:55.827 [2024-11-05 17:07:44.469809] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:55.827 [2024-11-05 17:07:44.469972] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135716 ] 00:28:55.827 { 00:28:55.827 "subsystems": [ 00:28:55.827 { 00:28:55.827 "subsystem": "bdev", 00:28:55.827 "config": [ 00:28:55.827 { 00:28:55.827 "params": { 00:28:55.827 "block_size": 4096, 00:28:55.827 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:55.827 "name": "aio1" 00:28:55.827 }, 00:28:55.827 "method": "bdev_aio_create" 00:28:55.827 }, 00:28:55.827 { 00:28:55.827 "params": { 00:28:55.827 "trtype": "pcie", 00:28:55.827 "traddr": "0000:00:06.0", 00:28:55.827 "name": "Nvme0" 00:28:55.827 }, 00:28:55.827 "method": "bdev_nvme_attach_controller" 00:28:55.827 }, 00:28:55.827 { 00:28:55.828 "method": "bdev_wait_for_examine" 00:28:55.828 } 00:28:55.828 ] 00:28:55.828 } 00:28:55.828 ] 00:28:55.828 } 00:28:55.828 [2024-11-05 17:07:44.623650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.086 [2024-11-05 17:07:44.779699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.344  [2024-11-05T17:07:46.187Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:28:57.310 00:28:57.310 17:07:46 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:28:57.310 17:07:46 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:28:57.310 17:07:46 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:28:57.310 17:07:46 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:28:57.310 17:07:46 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:28:57.310 17:07:46 -- dd/common.sh@31 -- # xtrace_disable 00:28:57.310 17:07:46 -- common/autotest_common.sh@10 -- # set +x 00:28:57.310 [2024-11-05 17:07:46.187044] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:57.310 [2024-11-05 17:07:46.187249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135740 ] 00:28:57.310 { 00:28:57.310 "subsystems": [ 00:28:57.310 { 00:28:57.310 "subsystem": "bdev", 00:28:57.310 "config": [ 00:28:57.310 { 00:28:57.310 "params": { 00:28:57.310 "block_size": 4096, 00:28:57.310 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:57.310 "name": "aio1" 00:28:57.310 }, 00:28:57.310 "method": "bdev_aio_create" 00:28:57.310 }, 00:28:57.310 { 00:28:57.310 "params": { 00:28:57.310 "trtype": "pcie", 00:28:57.310 "traddr": "0000:00:06.0", 00:28:57.310 "name": "Nvme0" 00:28:57.310 }, 00:28:57.310 "method": "bdev_nvme_attach_controller" 00:28:57.310 }, 00:28:57.310 { 00:28:57.310 "method": "bdev_wait_for_examine" 00:28:57.310 } 00:28:57.310 ] 00:28:57.310 } 00:28:57.311 ] 00:28:57.311 } 00:28:57.569 [2024-11-05 17:07:46.355403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.827 [2024-11-05 17:07:46.521864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.394  [2024-11-05T17:07:48.205Z] Copying: 65/65 [MB] (average 170 MBps) 00:28:59.328 00:28:59.586 17:07:48 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:28:59.586 17:07:48 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:28:59.586 17:07:48 -- dd/common.sh@31 -- # xtrace_disable 00:28:59.586 17:07:48 -- common/autotest_common.sh@10 -- # set +x 00:28:59.586 [2024-11-05 17:07:48.279847] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
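
--skip and --seek in spdk_dd count I/O units of --bs bytes, much like GNU dd; with --bs=1048576 the read-back above pulls one 1 MiB unit starting 64 MiB into aio1. A rough GNU dd analogue of that read-back against the aio backing file (illustrative paths):

    dd if=test/dd/aio1 of=dd.dump1 bs=1048576 skip=64 count=1
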
00:28:59.586 [2024-11-05 17:07:48.280201] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135774 ] 00:28:59.586 { 00:28:59.586 "subsystems": [ 00:28:59.586 { 00:28:59.586 "subsystem": "bdev", 00:28:59.586 "config": [ 00:28:59.586 { 00:28:59.586 "params": { 00:28:59.586 "block_size": 4096, 00:28:59.586 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:59.586 "name": "aio1" 00:28:59.586 }, 00:28:59.586 "method": "bdev_aio_create" 00:28:59.586 }, 00:28:59.586 { 00:28:59.586 "params": { 00:28:59.586 "trtype": "pcie", 00:28:59.586 "traddr": "0000:00:06.0", 00:28:59.586 "name": "Nvme0" 00:28:59.586 }, 00:28:59.586 "method": "bdev_nvme_attach_controller" 00:28:59.586 }, 00:28:59.586 { 00:28:59.586 "method": "bdev_wait_for_examine" 00:28:59.586 } 00:28:59.586 ] 00:28:59.586 } 00:28:59.586 ] 00:28:59.586 } 00:28:59.586 [2024-11-05 17:07:48.435357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.845 [2024-11-05 17:07:48.598173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.103  [2024-11-05T17:07:49.913Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:29:01.036 00:29:01.036 17:07:49 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:29:01.036 17:07:49 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:29:01.036 00:29:01.036 real 0m7.896s 00:29:01.036 user 0m5.781s 00:29:01.036 sys 0m1.005s 00:29:01.036 17:07:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:01.036 17:07:49 -- common/autotest_common.sh@10 -- # set +x 00:29:01.036 ************************************ 00:29:01.036 END TEST dd_offset_magic 00:29:01.036 ************************************ 00:29:01.294 17:07:49 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:29:01.294 17:07:49 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:29:01.294 17:07:49 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:29:01.294 17:07:49 -- dd/common.sh@11 -- # local nvme_ref= 00:29:01.294 17:07:49 -- dd/common.sh@12 -- # local size=4194330 00:29:01.294 17:07:49 -- dd/common.sh@14 -- # local bs=1048576 00:29:01.294 17:07:49 -- dd/common.sh@15 -- # local count=5 00:29:01.294 17:07:49 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:29:01.294 17:07:49 -- dd/common.sh@18 -- # gen_conf 00:29:01.294 17:07:49 -- dd/common.sh@31 -- # xtrace_disable 00:29:01.294 17:07:49 -- common/autotest_common.sh@10 -- # set +x 00:29:01.294 { 00:29:01.294 "subsystems": [ 00:29:01.294 { 00:29:01.294 "subsystem": "bdev", 00:29:01.294 "config": [ 00:29:01.294 { 00:29:01.294 "params": { 00:29:01.294 "block_size": 4096, 00:29:01.294 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:01.294 "name": "aio1" 00:29:01.294 }, 00:29:01.294 "method": "bdev_aio_create" 00:29:01.294 }, 00:29:01.294 { 00:29:01.294 "params": { 00:29:01.294 "trtype": "pcie", 00:29:01.294 "traddr": "0000:00:06.0", 00:29:01.294 "name": "Nvme0" 00:29:01.294 }, 00:29:01.294 "method": "bdev_nvme_attach_controller" 00:29:01.294 }, 00:29:01.294 { 00:29:01.294 "method": "bdev_wait_for_examine" 00:29:01.294 } 00:29:01.294 ] 00:29:01.294 } 00:29:01.294 ] 00:29:01.294 } 00:29:01.294 [2024-11-05 17:07:50.039025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
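
The clear_nvme cleanup above zero-fills the first 4194330 bytes of each bdev; with bs=1048576 that rounds up to count=5, which is where the 5120/5120 kB copies below come from. The sizing arithmetic:

    # ceil(4194330 / 1048576) = 5 one-MiB units, i.e. 5120 kB
    echo $(( (4194330 + 1048576 - 1) / 1048576 ))   # prints 5
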
00:29:01.294 [2024-11-05 17:07:50.039225] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135818 ] 00:29:01.552 [2024-11-05 17:07:50.210155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.552 [2024-11-05 17:07:50.385985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.119  [2024-11-05T17:07:51.930Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:29:03.053 00:29:03.053 17:07:51 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:29:03.053 17:07:51 -- dd/common.sh@10 -- # local bdev=aio1 00:29:03.053 17:07:51 -- dd/common.sh@11 -- # local nvme_ref= 00:29:03.053 17:07:51 -- dd/common.sh@12 -- # local size=4194330 00:29:03.053 17:07:51 -- dd/common.sh@14 -- # local bs=1048576 00:29:03.053 17:07:51 -- dd/common.sh@15 -- # local count=5 00:29:03.053 17:07:51 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:29:03.053 17:07:51 -- dd/common.sh@18 -- # gen_conf 00:29:03.053 17:07:51 -- dd/common.sh@31 -- # xtrace_disable 00:29:03.053 17:07:51 -- common/autotest_common.sh@10 -- # set +x 00:29:03.053 [2024-11-05 17:07:51.766243] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:03.053 [2024-11-05 17:07:51.766419] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135851 ] 00:29:03.053 { 00:29:03.053 "subsystems": [ 00:29:03.053 { 00:29:03.053 "subsystem": "bdev", 00:29:03.053 "config": [ 00:29:03.053 { 00:29:03.053 "params": { 00:29:03.053 "block_size": 4096, 00:29:03.053 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:03.053 "name": "aio1" 00:29:03.053 }, 00:29:03.053 "method": "bdev_aio_create" 00:29:03.053 }, 00:29:03.053 { 00:29:03.053 "params": { 00:29:03.053 "trtype": "pcie", 00:29:03.053 "traddr": "0000:00:06.0", 00:29:03.053 "name": "Nvme0" 00:29:03.053 }, 00:29:03.053 "method": "bdev_nvme_attach_controller" 00:29:03.053 }, 00:29:03.053 { 00:29:03.053 "method": "bdev_wait_for_examine" 00:29:03.053 } 00:29:03.053 ] 00:29:03.053 } 00:29:03.053 ] 00:29:03.053 } 00:29:03.053 [2024-11-05 17:07:51.920326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.312 [2024-11-05 17:07:52.077665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.570  [2024-11-05T17:07:53.819Z] Copying: 5120/5120 [kB] (average 217 MBps) 00:29:04.942 00:29:04.942 17:07:53 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:29:04.942 00:29:04.942 real 0m18.515s 00:29:04.942 user 0m14.165s 00:29:04.942 sys 0m2.637s 00:29:04.942 17:07:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:04.942 17:07:53 -- common/autotest_common.sh@10 -- # set +x 00:29:04.942 ************************************ 00:29:04.942 END TEST spdk_dd_bdev_to_bdev 00:29:04.942 ************************************ 00:29:04.942 17:07:53 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:29:04.942 17:07:53 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:29:04.942 17:07:53 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:04.942 17:07:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:04.942 17:07:53 -- common/autotest_common.sh@10 -- # set +x 00:29:04.942 ************************************ 00:29:04.942 START TEST spdk_dd_sparse 00:29:04.942 ************************************ 00:29:04.942 17:07:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:29:04.942 * Looking for test storage... 00:29:04.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:04.942 17:07:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:04.942 17:07:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:04.942 17:07:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:04.942 17:07:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:04.942 17:07:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:04.942 17:07:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:04.942 17:07:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:04.942 17:07:53 -- scripts/common.sh@335 -- # IFS=.-: 00:29:04.942 17:07:53 -- scripts/common.sh@335 -- # read -ra ver1 00:29:04.942 17:07:53 -- scripts/common.sh@336 -- # IFS=.-: 00:29:04.942 17:07:53 -- scripts/common.sh@336 -- # read -ra ver2 00:29:04.942 17:07:53 -- scripts/common.sh@337 -- # local 'op=<' 00:29:04.942 17:07:53 -- scripts/common.sh@339 -- # ver1_l=2 00:29:04.942 17:07:53 -- scripts/common.sh@340 -- # ver2_l=1 00:29:04.942 17:07:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:04.942 17:07:53 -- scripts/common.sh@343 -- # case "$op" in 00:29:04.942 17:07:53 -- scripts/common.sh@344 -- # : 1 00:29:04.942 17:07:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:04.942 17:07:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:04.942 17:07:53 -- scripts/common.sh@364 -- # decimal 1 00:29:04.942 17:07:53 -- scripts/common.sh@352 -- # local d=1 00:29:04.942 17:07:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:04.942 17:07:53 -- scripts/common.sh@354 -- # echo 1 00:29:04.942 17:07:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:04.942 17:07:53 -- scripts/common.sh@365 -- # decimal 2 00:29:04.942 17:07:53 -- scripts/common.sh@352 -- # local d=2 00:29:04.942 17:07:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:04.942 17:07:53 -- scripts/common.sh@354 -- # echo 2 00:29:04.942 17:07:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:04.942 17:07:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:04.942 17:07:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:04.942 17:07:53 -- scripts/common.sh@367 -- # return 0 00:29:04.942 17:07:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:04.942 17:07:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:04.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.942 --rc genhtml_branch_coverage=1 00:29:04.942 --rc genhtml_function_coverage=1 00:29:04.942 --rc genhtml_legend=1 00:29:04.942 --rc geninfo_all_blocks=1 00:29:04.942 --rc geninfo_unexecuted_blocks=1 00:29:04.942 00:29:04.942 ' 00:29:04.942 17:07:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:04.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.942 --rc genhtml_branch_coverage=1 00:29:04.942 --rc genhtml_function_coverage=1 00:29:04.942 --rc genhtml_legend=1 00:29:04.942 --rc geninfo_all_blocks=1 00:29:04.942 --rc geninfo_unexecuted_blocks=1 00:29:04.942 00:29:04.942 ' 00:29:04.942 17:07:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:04.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.942 --rc genhtml_branch_coverage=1 00:29:04.942 --rc genhtml_function_coverage=1 00:29:04.942 --rc genhtml_legend=1 00:29:04.942 --rc geninfo_all_blocks=1 00:29:04.942 --rc geninfo_unexecuted_blocks=1 00:29:04.942 00:29:04.942 ' 00:29:04.942 17:07:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:04.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.942 --rc genhtml_branch_coverage=1 00:29:04.942 --rc genhtml_function_coverage=1 00:29:04.942 --rc genhtml_legend=1 00:29:04.942 --rc geninfo_all_blocks=1 00:29:04.942 --rc geninfo_unexecuted_blocks=1 00:29:04.942 00:29:04.942 ' 00:29:04.942 17:07:53 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:04.942 17:07:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.942 17:07:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.942 17:07:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.942 17:07:53 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:04.942 17:07:53 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:04.942 17:07:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:04.942 17:07:53 -- paths/export.sh@5 -- # export PATH 00:29:04.942 17:07:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:04.942 17:07:53 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:29:04.942 17:07:53 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:29:04.942 17:07:53 -- dd/sparse.sh@110 -- # file1=file_zero1 00:29:04.943 17:07:53 -- dd/sparse.sh@111 -- # file2=file_zero2 00:29:04.943 17:07:53 -- dd/sparse.sh@112 -- # file3=file_zero3 00:29:04.943 17:07:53 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:29:04.943 17:07:53 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:29:04.943 17:07:53 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:29:04.943 17:07:53 -- dd/sparse.sh@118 -- # prepare 00:29:04.943 17:07:53 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:29:04.943 17:07:53 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:29:04.943 1+0 records in 00:29:04.943 1+0 records out 00:29:04.943 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0101957 s, 411 MB/s 00:29:04.943 17:07:53 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:29:04.943 1+0 records in 00:29:04.943 1+0 records out 00:29:04.943 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00787253 s, 533 MB/s 00:29:04.943 17:07:53 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:29:04.943 1+0 records in 00:29:04.943 1+0 records out 00:29:04.943 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00726046 s, 578 MB/s 00:29:04.943 17:07:53 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:29:04.943 17:07:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:04.943 17:07:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:04.943 17:07:53 -- common/autotest_common.sh@10 -- # set +x 00:29:04.943 ************************************ 00:29:04.943 START TEST dd_sparse_file_to_file 00:29:04.943 ************************************ 00:29:04.943 17:07:53 -- 
common/autotest_common.sh@1114 -- # file_to_file 00:29:04.943 17:07:53 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:29:04.943 17:07:53 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:29:04.943 17:07:53 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:29:04.943 17:07:53 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:29:04.943 17:07:53 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:29:04.943 17:07:53 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:29:04.943 17:07:53 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:29:04.943 17:07:53 -- dd/sparse.sh@41 -- # gen_conf 00:29:04.943 17:07:53 -- dd/common.sh@31 -- # xtrace_disable 00:29:04.943 17:07:53 -- common/autotest_common.sh@10 -- # set +x 00:29:04.943 { 00:29:04.943 "subsystems": [ 00:29:04.943 { 00:29:04.943 "subsystem": "bdev", 00:29:04.943 "config": [ 00:29:04.943 { 00:29:04.943 "params": { 00:29:04.943 "block_size": 4096, 00:29:04.943 "filename": "dd_sparse_aio_disk", 00:29:04.943 "name": "dd_aio" 00:29:04.943 }, 00:29:04.943 "method": "bdev_aio_create" 00:29:04.943 }, 00:29:04.943 { 00:29:04.943 "params": { 00:29:04.943 "lvs_name": "dd_lvstore", 00:29:04.943 "bdev_name": "dd_aio" 00:29:04.943 }, 00:29:04.943 "method": "bdev_lvol_create_lvstore" 00:29:04.943 }, 00:29:04.943 { 00:29:04.943 "method": "bdev_wait_for_examine" 00:29:04.943 } 00:29:04.943 ] 00:29:04.943 } 00:29:04.943 ] 00:29:04.943 } 00:29:04.943 [2024-11-05 17:07:53.828810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:04.943 [2024-11-05 17:07:53.828993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135943 ] 00:29:05.201 [2024-11-05 17:07:53.995430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.459 [2024-11-05 17:07:54.171197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.717  [2024-11-05T17:07:55.528Z] Copying: 12/36 [MB] (average 1200 MBps) 00:29:06.651 00:29:06.909 17:07:55 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:29:06.909 17:07:55 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:29:06.909 17:07:55 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:29:06.909 17:07:55 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:29:06.909 17:07:55 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:29:06.909 17:07:55 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:29:06.909 17:07:55 -- dd/sparse.sh@52 -- # stat1_b=24576 00:29:06.909 17:07:55 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:29:06.909 17:07:55 -- dd/sparse.sh@53 -- # stat2_b=24576 00:29:06.909 17:07:55 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:29:06.909 00:29:06.909 real 0m1.820s 00:29:06.909 user 0m1.409s 00:29:06.909 sys 0m0.269s 00:29:06.909 17:07:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:06.909 ************************************ 00:29:06.909 END TEST dd_sparse_file_to_file 00:29:06.909 ************************************ 00:29:06.909 17:07:55 -- common/autotest_common.sh@10 -- # set +x 00:29:06.909 17:07:55 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:29:06.909 
17:07:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:06.909 17:07:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:06.909 17:07:55 -- common/autotest_common.sh@10 -- # set +x 00:29:06.909 ************************************ 00:29:06.909 START TEST dd_sparse_file_to_bdev 00:29:06.909 ************************************ 00:29:06.909 17:07:55 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:29:06.909 17:07:55 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:29:06.909 17:07:55 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:29:06.909 17:07:55 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:29:06.909 17:07:55 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:29:06.909 17:07:55 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:29:06.909 17:07:55 -- dd/sparse.sh@73 -- # gen_conf 00:29:06.909 17:07:55 -- dd/common.sh@31 -- # xtrace_disable 00:29:06.909 17:07:55 -- common/autotest_common.sh@10 -- # set +x 00:29:06.909 { 00:29:06.909 "subsystems": [ 00:29:06.909 { 00:29:06.909 "subsystem": "bdev", 00:29:06.909 "config": [ 00:29:06.909 { 00:29:06.909 "params": { 00:29:06.909 "block_size": 4096, 00:29:06.909 "filename": "dd_sparse_aio_disk", 00:29:06.909 "name": "dd_aio" 00:29:06.909 }, 00:29:06.909 "method": "bdev_aio_create" 00:29:06.909 }, 00:29:06.909 { 00:29:06.909 "params": { 00:29:06.909 "lvs_name": "dd_lvstore", 00:29:06.909 "lvol_name": "dd_lvol", 00:29:06.909 "size": 37748736, 00:29:06.909 "thin_provision": true 00:29:06.909 }, 00:29:06.909 "method": "bdev_lvol_create" 00:29:06.909 }, 00:29:06.909 { 00:29:06.909 "method": "bdev_wait_for_examine" 00:29:06.909 } 00:29:06.909 ] 00:29:06.909 } 00:29:06.909 ] 00:29:06.909 } 00:29:06.909 [2024-11-05 17:07:55.692677] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
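
file_to_bdev copies the sparse file into a freshly created logical volume: the JSON above builds an lvstore on the aio bdev and a thin-provisioned lvol sized to the file's apparent size. The numbers line up with the prepare step and the stat checks from dd_sparse_file_to_file: three 4 MiB writes at seek 0, 4 and 8 give a 36 MiB apparent size with only 12 MiB allocated, and thin provisioning lets the lvol hold that without allocating the holes:

    echo $(( 36 * 1024 * 1024 ))   # 37748736, apparent size of the file and the lvol
    echo $(( 24576 * 512 ))        # 12582912, bytes actually allocated (stat %b blocks * 512)
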
00:29:06.909 [2024-11-05 17:07:55.692902] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136011 ] 00:29:07.167 [2024-11-05 17:07:55.859387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.167 [2024-11-05 17:07:56.015253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.426 [2024-11-05 17:07:56.271280] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:29:07.684  [2024-11-05T17:07:56.561Z] Copying: 12/36 [MB] (average 387 MBps)[2024-11-05 17:07:56.335112] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:29:08.623 00:29:08.623 00:29:08.623 00:29:08.623 real 0m1.737s 00:29:08.623 user 0m1.363s 00:29:08.623 sys 0m0.265s 00:29:08.623 17:07:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:08.623 17:07:57 -- common/autotest_common.sh@10 -- # set +x 00:29:08.623 ************************************ 00:29:08.623 END TEST dd_sparse_file_to_bdev 00:29:08.623 ************************************ 00:29:08.623 17:07:57 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:29:08.623 17:07:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:08.623 17:07:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:08.623 17:07:57 -- common/autotest_common.sh@10 -- # set +x 00:29:08.623 ************************************ 00:29:08.623 START TEST dd_sparse_bdev_to_file 00:29:08.623 ************************************ 00:29:08.623 17:07:57 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:29:08.623 17:07:57 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:29:08.623 17:07:57 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:29:08.623 17:07:57 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:29:08.623 17:07:57 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:29:08.623 17:07:57 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:29:08.623 17:07:57 -- dd/sparse.sh@91 -- # gen_conf 00:29:08.623 17:07:57 -- dd/common.sh@31 -- # xtrace_disable 00:29:08.623 17:07:57 -- common/autotest_common.sh@10 -- # set +x 00:29:08.623 [2024-11-05 17:07:57.482789] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:29:08.623 [2024-11-05 17:07:57.482957] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136056 ] 00:29:08.623 { 00:29:08.623 "subsystems": [ 00:29:08.623 { 00:29:08.623 "subsystem": "bdev", 00:29:08.623 "config": [ 00:29:08.623 { 00:29:08.623 "params": { 00:29:08.623 "block_size": 4096, 00:29:08.623 "filename": "dd_sparse_aio_disk", 00:29:08.623 "name": "dd_aio" 00:29:08.623 }, 00:29:08.623 "method": "bdev_aio_create" 00:29:08.623 }, 00:29:08.623 { 00:29:08.623 "method": "bdev_wait_for_examine" 00:29:08.623 } 00:29:08.624 ] 00:29:08.624 } 00:29:08.624 ] 00:29:08.624 } 00:29:08.881 [2024-11-05 17:07:57.636079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.140 [2024-11-05 17:07:57.796934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.398  [2024-11-05T17:07:59.208Z] Copying: 12/36 [MB] (average 1090 MBps) 00:29:10.331 00:29:10.331 17:07:59 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:29:10.331 17:07:59 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:29:10.331 17:07:59 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:29:10.331 17:07:59 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:29:10.331 17:07:59 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:29:10.331 17:07:59 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:29:10.331 17:07:59 -- dd/sparse.sh@102 -- # stat2_b=24576 00:29:10.331 17:07:59 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:29:10.331 17:07:59 -- dd/sparse.sh@103 -- # stat3_b=24576 00:29:10.331 17:07:59 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:29:10.331 00:29:10.331 real 0m1.727s 00:29:10.331 user 0m1.373s 00:29:10.331 sys 0m0.248s 00:29:10.331 17:07:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:10.331 ************************************ 00:29:10.331 END TEST dd_sparse_bdev_to_file 00:29:10.331 17:07:59 -- common/autotest_common.sh@10 -- # set +x 00:29:10.331 ************************************ 00:29:10.331 17:07:59 -- dd/sparse.sh@1 -- # cleanup 00:29:10.331 17:07:59 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:29:10.331 17:07:59 -- dd/sparse.sh@12 -- # rm file_zero1 00:29:10.331 17:07:59 -- dd/sparse.sh@13 -- # rm file_zero2 00:29:10.331 17:07:59 -- dd/sparse.sh@14 -- # rm file_zero3 00:29:10.331 ************************************ 00:29:10.331 END TEST spdk_dd_sparse 00:29:10.331 ************************************ 00:29:10.331 00:29:10.331 real 0m5.677s 00:29:10.331 user 0m4.356s 00:29:10.331 sys 0m0.956s 00:29:10.331 17:07:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:10.331 17:07:59 -- common/autotest_common.sh@10 -- # set +x 00:29:10.590 17:07:59 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:29:10.590 17:07:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:10.590 17:07:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:10.590 17:07:59 -- common/autotest_common.sh@10 -- # set +x 00:29:10.590 ************************************ 00:29:10.590 START TEST spdk_dd_negative 00:29:10.590 ************************************ 00:29:10.590 17:07:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:29:10.590 * Looking for test storage... 
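
The %s/%b stat pairs above are how each sparse test asserts correctness: the apparent size (%s) must match end to end, and the allocated-block count (%b) must stay at 24576, showing the holes survived the round trip through the lvol. The same check in isolation, using the file names from the log:

    stat --printf='%s bytes apparent, %b blocks allocated\n' file_zero1 file_zero3
    # a successful run prints 37748736 and 24576 for both files
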
00:29:10.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:10.590 17:07:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:10.590 17:07:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:10.590 17:07:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:10.590 17:07:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:10.590 17:07:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:10.590 17:07:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:10.590 17:07:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:10.590 17:07:59 -- scripts/common.sh@335 -- # IFS=.-: 00:29:10.590 17:07:59 -- scripts/common.sh@335 -- # read -ra ver1 00:29:10.590 17:07:59 -- scripts/common.sh@336 -- # IFS=.-: 00:29:10.590 17:07:59 -- scripts/common.sh@336 -- # read -ra ver2 00:29:10.590 17:07:59 -- scripts/common.sh@337 -- # local 'op=<' 00:29:10.590 17:07:59 -- scripts/common.sh@339 -- # ver1_l=2 00:29:10.590 17:07:59 -- scripts/common.sh@340 -- # ver2_l=1 00:29:10.590 17:07:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:10.590 17:07:59 -- scripts/common.sh@343 -- # case "$op" in 00:29:10.590 17:07:59 -- scripts/common.sh@344 -- # : 1 00:29:10.590 17:07:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:10.590 17:07:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:10.590 17:07:59 -- scripts/common.sh@364 -- # decimal 1 00:29:10.590 17:07:59 -- scripts/common.sh@352 -- # local d=1 00:29:10.590 17:07:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:10.590 17:07:59 -- scripts/common.sh@354 -- # echo 1 00:29:10.590 17:07:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:10.590 17:07:59 -- scripts/common.sh@365 -- # decimal 2 00:29:10.590 17:07:59 -- scripts/common.sh@352 -- # local d=2 00:29:10.590 17:07:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:10.590 17:07:59 -- scripts/common.sh@354 -- # echo 2 00:29:10.590 17:07:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:10.590 17:07:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:10.590 17:07:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:10.590 17:07:59 -- scripts/common.sh@367 -- # return 0 00:29:10.590 17:07:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:10.590 17:07:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:10.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.590 --rc genhtml_branch_coverage=1 00:29:10.590 --rc genhtml_function_coverage=1 00:29:10.590 --rc genhtml_legend=1 00:29:10.590 --rc geninfo_all_blocks=1 00:29:10.590 --rc geninfo_unexecuted_blocks=1 00:29:10.590 00:29:10.590 ' 00:29:10.590 17:07:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:10.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.590 --rc genhtml_branch_coverage=1 00:29:10.590 --rc genhtml_function_coverage=1 00:29:10.590 --rc genhtml_legend=1 00:29:10.590 --rc geninfo_all_blocks=1 00:29:10.590 --rc geninfo_unexecuted_blocks=1 00:29:10.590 00:29:10.590 ' 00:29:10.590 17:07:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:10.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.590 --rc genhtml_branch_coverage=1 00:29:10.590 --rc genhtml_function_coverage=1 00:29:10.590 --rc genhtml_legend=1 00:29:10.590 --rc geninfo_all_blocks=1 00:29:10.590 --rc geninfo_unexecuted_blocks=1 00:29:10.590 00:29:10.590 ' 00:29:10.590 17:07:59 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:10.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.590 --rc genhtml_branch_coverage=1 00:29:10.590 --rc genhtml_function_coverage=1 00:29:10.590 --rc genhtml_legend=1 00:29:10.590 --rc geninfo_all_blocks=1 00:29:10.590 --rc geninfo_unexecuted_blocks=1 00:29:10.590 00:29:10.590 ' 00:29:10.590 17:07:59 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:10.590 17:07:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:10.590 17:07:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:10.590 17:07:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:10.590 17:07:59 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:10.590 17:07:59 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:10.590 17:07:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:10.590 17:07:59 -- paths/export.sh@5 -- # export PATH 00:29:10.590 17:07:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:10.590 17:07:59 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:10.590 17:07:59 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:10.590 17:07:59 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:10.590 17:07:59 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:10.590 17:07:59 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 
00:29:10.590 17:07:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:10.590 17:07:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:10.590 17:07:59 -- common/autotest_common.sh@10 -- # set +x 00:29:10.590 ************************************ 00:29:10.590 START TEST dd_invalid_arguments 00:29:10.590 ************************************ 00:29:10.590 17:07:59 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:29:10.590 17:07:59 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:10.590 17:07:59 -- common/autotest_common.sh@650 -- # local es=0 00:29:10.590 17:07:59 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:10.590 17:07:59 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:10.590 17:07:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:10.590 17:07:59 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:10.590 17:07:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:10.590 17:07:59 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:10.590 17:07:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:10.590 17:07:59 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:10.590 17:07:59 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:10.590 17:07:59 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:10.849 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:29:10.849 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:29:10.849 options: 00:29:10.849 -c, --config JSON config file (default none) 00:29:10.849 --json JSON config file (default none) 00:29:10.849 --json-ignore-init-errors 00:29:10.849 don't exit on invalid config entry 00:29:10.849 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:29:10.849 -g, --single-file-segments 00:29:10.849 force creating just one hugetlbfs file 00:29:10.849 -h, --help show this usage 00:29:10.849 -i, --shm-id shared memory ID (optional) 00:29:10.849 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:29:10.849 --lcores lcore to CPU mapping list. The list is in the format: 00:29:10.849 [<,lcores[@CPUs]>...] 00:29:10.849 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:29:10.849 Within the group, '-' is used for range separator, 00:29:10.849 ',' is used for single number separator. 00:29:10.849 '( )' can be omitted for single element group, 00:29:10.849 '@' can be omitted if cpus and lcores have the same value 00:29:10.849 -n, --mem-channels channel number of memory channels used for DPDK 00:29:10.849 -p, --main-core main (primary) core for DPDK 00:29:10.849 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:29:10.849 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:29:10.849 --disable-cpumask-locks Disable CPU core lock files. 
00:29:10.849 --silence-noticelog disable notice level logging to stderr 00:29:10.849 --msg-mempool-size global message memory pool size in count (default: 262143) 00:29:10.849 -u, --no-pci disable PCI access 00:29:10.849 --wait-for-rpc wait for RPCs to initialize subsystems 00:29:10.849 --max-delay maximum reactor delay (in microseconds) 00:29:10.849 -B, --pci-blocked pci addr to block (can be used more than once) 00:29:10.849 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:29:10.849 -R, --huge-unlink unlink huge files after initialization 00:29:10.849 -v, --version print SPDK version 00:29:10.849 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:29:10.849 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:29:10.849 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:29:10.849 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:29:10.849 Tracepoints vary in size and can use more than one trace entry. 00:29:10.849 --rpcs-allowed comma-separated list of permitted RPCS 00:29:10.849 --env-context Opaque context for use of the env implementation 00:29:10.849 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:29:10.849 --no-huge run without using hugepages 00:29:10.850 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:29:10.850 -e, --tpoint-group [:] 00:29:10.850 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:29:10.850 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:29:10.850 Groups and [2024-11-05 17:07:59.534612] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:29:10.850 masks can be combined (e.g. thread,bdev:0x1). 00:29:10.850 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:29:10.850 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:29:10.850 [--------- DD Options ---------] 00:29:10.850 --if Input file. Must specify either --if or --ib. 00:29:10.850 --ib Input bdev. Must specifier either --if or --ib 00:29:10.850 --of Output file. Must specify either --of or --ob. 00:29:10.850 --ob Output bdev. Must specify either --of or --ob. 00:29:10.850 --iflag Input file flags. 00:29:10.850 --oflag Output file flags. 00:29:10.850 --bs I/O unit size (default: 4096) 00:29:10.850 --qd Queue depth (default: 2) 00:29:10.850 --count I/O unit count. The number of I/O units to copy. (default: all) 00:29:10.850 --skip Skip this many I/O units at start of input. 
(default: 0) 00:29:10.850 --seek Skip this many I/O units at start of output. (default: 0) 00:29:10.850 --aio Force usage of AIO. (by default io_uring is used if available) 00:29:10.850 --sparse Enable hole skipping in input target 00:29:10.850 Available iflag and oflag values: 00:29:10.850 append - append mode 00:29:10.850 direct - use direct I/O for data 00:29:10.850 directory - fail unless a directory 00:29:10.850 dsync - use synchronized I/O for data 00:29:10.850 noatime - do not update access time 00:29:10.850 noctty - do not assign controlling terminal from file 00:29:10.850 nofollow - do not follow symlinks 00:29:10.850 nonblock - use non-blocking I/O 00:29:10.850 sync - use synchronized I/O for data and metadata 00:29:10.850 17:07:59 -- common/autotest_common.sh@653 -- # es=2 00:29:10.850 17:07:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:10.850 17:07:59 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:10.850 17:07:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:10.850 00:29:10.850 real 0m0.112s 00:29:10.850 user 0m0.067s 00:29:10.850 sys 0m0.045s 00:29:10.850 17:07:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:10.850 17:07:59 -- common/autotest_common.sh@10 -- # set +x 00:29:10.850 ************************************ 00:29:10.850 END TEST dd_invalid_arguments 00:29:10.850 ************************************ 00:29:10.850 17:07:59 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:29:10.850 17:07:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:10.850 17:07:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:10.850 17:07:59 -- common/autotest_common.sh@10 -- # set +x 00:29:10.850 ************************************ 00:29:10.850 START TEST dd_double_input 00:29:10.850 ************************************ 00:29:10.850 17:07:59 -- common/autotest_common.sh@1114 -- # double_input 00:29:10.850 17:07:59 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:10.850 17:07:59 -- common/autotest_common.sh@650 -- # local es=0 00:29:10.850 17:07:59 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:10.850 17:07:59 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:10.850 17:07:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:10.850 17:07:59 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:10.850 17:07:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:10.850 17:07:59 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:10.850 17:07:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:10.850 17:07:59 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:10.850 17:07:59 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:10.850 17:07:59 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:10.850 [2024-11-05 17:07:59.690560] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
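
Every dd_invalid_* and dd_double_* case here follows the same negative pattern: wrap the spdk_dd call in the NOT helper, capture the exit status into es, and pass only if the command failed. A minimal sketch of that pattern, with illustrative bdev names standing in for the empty --ib/--ob values used by the suite:

    if ./build/bin/spdk_dd --if=dd.dump0 --ib=somebdev --ob=otherbdev 2>/dev/null; then
      echo 'expected spdk_dd to reject --if together with --ib' >&2
      exit 1
    fi
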
00:29:10.850 17:07:59 -- common/autotest_common.sh@653 -- # es=22 00:29:10.850 17:07:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:10.850 17:07:59 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:10.850 17:07:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:10.850 00:29:10.850 real 0m0.104s 00:29:10.850 user 0m0.043s 00:29:10.850 sys 0m0.060s 00:29:10.850 17:07:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:10.850 17:07:59 -- common/autotest_common.sh@10 -- # set +x 00:29:10.850 ************************************ 00:29:10.850 END TEST dd_double_input 00:29:10.850 ************************************ 00:29:11.108 17:07:59 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:29:11.108 17:07:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:11.108 17:07:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:11.108 17:07:59 -- common/autotest_common.sh@10 -- # set +x 00:29:11.108 ************************************ 00:29:11.108 START TEST dd_double_output 00:29:11.108 ************************************ 00:29:11.108 17:07:59 -- common/autotest_common.sh@1114 -- # double_output 00:29:11.108 17:07:59 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:11.108 17:07:59 -- common/autotest_common.sh@650 -- # local es=0 00:29:11.108 17:07:59 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:11.108 17:07:59 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:11.108 17:07:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:11.108 17:07:59 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:11.108 17:07:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:11.108 17:07:59 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:11.108 17:07:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:11.108 17:07:59 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:11.108 17:07:59 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:11.108 17:07:59 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:11.108 [2024-11-05 17:07:59.854655] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:29:11.108 17:07:59 -- common/autotest_common.sh@653 -- # es=22 00:29:11.108 17:07:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:11.108 17:07:59 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:11.108 17:07:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:11.108 00:29:11.108 real 0m0.107s 00:29:11.108 user 0m0.058s 00:29:11.108 sys 0m0.049s 00:29:11.108 17:07:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:11.108 17:07:59 -- common/autotest_common.sh@10 -- # set +x 00:29:11.108 ************************************ 00:29:11.108 END TEST dd_double_output 00:29:11.108 ************************************ 00:29:11.108 17:07:59 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:29:11.108 17:07:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:11.108 17:07:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:11.108 17:07:59 -- common/autotest_common.sh@10 -- # set +x 00:29:11.108 ************************************ 00:29:11.108 START TEST dd_no_input 00:29:11.108 ************************************ 00:29:11.108 17:07:59 -- common/autotest_common.sh@1114 -- # no_input 00:29:11.108 17:07:59 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:11.108 17:07:59 -- common/autotest_common.sh@650 -- # local es=0 00:29:11.108 17:07:59 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:11.108 17:07:59 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:11.108 17:07:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:11.108 17:07:59 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:11.108 17:07:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:11.109 17:07:59 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:11.109 17:07:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:11.109 17:07:59 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:11.109 17:07:59 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:11.109 17:07:59 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:11.367 [2024-11-05 17:08:00.019496] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:29:11.367 17:08:00 -- common/autotest_common.sh@653 -- # es=22 00:29:11.367 17:08:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:11.367 17:08:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:11.367 17:08:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:11.367 00:29:11.367 real 0m0.114s 00:29:11.367 user 0m0.049s 00:29:11.367 sys 0m0.066s 00:29:11.367 17:08:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:11.367 17:08:00 -- common/autotest_common.sh@10 -- # set +x 00:29:11.367 ************************************ 00:29:11.367 END TEST dd_no_input 00:29:11.367 ************************************ 00:29:11.367 17:08:00 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:29:11.367 17:08:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:11.367 17:08:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:11.367 17:08:00 -- common/autotest_common.sh@10 -- # set +x 00:29:11.367 ************************************ 
00:29:11.367 START TEST dd_no_output 00:29:11.367 ************************************ 00:29:11.367 17:08:00 -- common/autotest_common.sh@1114 -- # no_output 00:29:11.367 17:08:00 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:11.367 17:08:00 -- common/autotest_common.sh@650 -- # local es=0 00:29:11.367 17:08:00 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:11.367 17:08:00 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:11.367 17:08:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:11.367 17:08:00 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:11.367 17:08:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:11.367 17:08:00 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:11.367 17:08:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:11.367 17:08:00 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:11.367 17:08:00 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:11.367 17:08:00 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:11.367 [2024-11-05 17:08:00.182201] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:29:11.367 17:08:00 -- common/autotest_common.sh@653 -- # es=22 00:29:11.367 17:08:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:11.367 17:08:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:11.367 17:08:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:11.367 00:29:11.367 real 0m0.113s 00:29:11.367 user 0m0.056s 00:29:11.367 sys 0m0.058s 00:29:11.367 17:08:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:11.367 17:08:00 -- common/autotest_common.sh@10 -- # set +x 00:29:11.367 ************************************ 00:29:11.367 END TEST dd_no_output 00:29:11.367 ************************************ 00:29:11.626 17:08:00 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:29:11.626 17:08:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:11.626 17:08:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:11.626 17:08:00 -- common/autotest_common.sh@10 -- # set +x 00:29:11.626 ************************************ 00:29:11.626 START TEST dd_wrong_blocksize 00:29:11.626 ************************************ 00:29:11.626 17:08:00 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:29:11.626 17:08:00 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:11.626 17:08:00 -- common/autotest_common.sh@650 -- # local es=0 00:29:11.626 17:08:00 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:11.626 17:08:00 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:11.626 17:08:00 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:29:11.626 17:08:00 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:11.626 17:08:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:11.626 17:08:00 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:11.626 17:08:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:11.626 17:08:00 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:11.626 17:08:00 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:11.626 17:08:00 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:11.626 [2024-11-05 17:08:00.354000] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:29:11.626 17:08:00 -- common/autotest_common.sh@653 -- # es=22 00:29:11.626 17:08:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:11.626 17:08:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:11.626 17:08:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:11.626 00:29:11.626 real 0m0.113s 00:29:11.626 user 0m0.058s 00:29:11.626 sys 0m0.056s 00:29:11.626 17:08:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:11.626 17:08:00 -- common/autotest_common.sh@10 -- # set +x 00:29:11.626 ************************************ 00:29:11.626 END TEST dd_wrong_blocksize 00:29:11.626 ************************************ 00:29:11.626 17:08:00 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:29:11.626 17:08:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:11.626 17:08:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:11.626 17:08:00 -- common/autotest_common.sh@10 -- # set +x 00:29:11.626 ************************************ 00:29:11.626 START TEST dd_smaller_blocksize 00:29:11.626 ************************************ 00:29:11.626 17:08:00 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:29:11.626 17:08:00 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:11.626 17:08:00 -- common/autotest_common.sh@650 -- # local es=0 00:29:11.626 17:08:00 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:11.626 17:08:00 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:11.626 17:08:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:11.626 17:08:00 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:11.626 17:08:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:11.626 17:08:00 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:11.626 17:08:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:11.626 17:08:00 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:11.626 17:08:00 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:29:11.626 17:08:00 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:11.907 [2024-11-05 17:08:00.529070] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:11.907 [2024-11-05 17:08:00.529297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136328 ] 00:29:11.907 [2024-11-05 17:08:00.703562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.182 [2024-11-05 17:08:00.937635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.748 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:29:12.748 [2024-11-05 17:08:01.484899] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:29:12.748 [2024-11-05 17:08:01.485008] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:13.315 [2024-11-05 17:08:02.076638] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:13.573 17:08:02 -- common/autotest_common.sh@653 -- # es=244 00:29:13.573 17:08:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:13.573 17:08:02 -- common/autotest_common.sh@662 -- # es=116 00:29:13.573 17:08:02 -- common/autotest_common.sh@663 -- # case "$es" in 00:29:13.573 17:08:02 -- common/autotest_common.sh@670 -- # es=1 00:29:13.573 17:08:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:13.573 00:29:13.573 real 0m1.939s 00:29:13.573 user 0m1.395s 00:29:13.573 sys 0m0.444s 00:29:13.573 17:08:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:13.573 17:08:02 -- common/autotest_common.sh@10 -- # set +x 00:29:13.573 ************************************ 00:29:13.573 END TEST dd_smaller_blocksize 00:29:13.573 ************************************ 00:29:13.573 17:08:02 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:29:13.573 17:08:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:13.573 17:08:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:13.573 17:08:02 -- common/autotest_common.sh@10 -- # set +x 00:29:13.573 ************************************ 00:29:13.573 START TEST dd_invalid_count 00:29:13.573 ************************************ 00:29:13.573 17:08:02 -- common/autotest_common.sh@1114 -- # invalid_count 00:29:13.573 17:08:02 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:13.573 17:08:02 -- common/autotest_common.sh@650 -- # local es=0 00:29:13.573 17:08:02 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:13.574 17:08:02 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:13.574 17:08:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:13.574 17:08:02 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:13.574 17:08:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:13.574 17:08:02 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:13.574 17:08:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:13.574 17:08:02 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:13.574 17:08:02 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:13.574 17:08:02 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:13.832 [2024-11-05 17:08:02.517414] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:29:13.832 17:08:02 -- common/autotest_common.sh@653 -- # es=22 00:29:13.832 17:08:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:13.832 17:08:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:13.832 17:08:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:13.832 00:29:13.832 real 0m0.113s 00:29:13.832 user 0m0.062s 00:29:13.832 sys 0m0.051s 00:29:13.832 17:08:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:13.832 17:08:02 -- common/autotest_common.sh@10 -- # set +x 00:29:13.832 ************************************ 00:29:13.832 END TEST dd_invalid_count 00:29:13.832 ************************************ 00:29:13.832 17:08:02 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:29:13.832 17:08:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:13.832 17:08:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:13.832 17:08:02 -- common/autotest_common.sh@10 -- # set +x 00:29:13.832 ************************************ 00:29:13.832 START TEST dd_invalid_oflag 00:29:13.832 ************************************ 00:29:13.832 17:08:02 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:29:13.832 17:08:02 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:13.832 17:08:02 -- common/autotest_common.sh@650 -- # local es=0 00:29:13.833 17:08:02 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:13.833 17:08:02 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:13.833 17:08:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:13.833 17:08:02 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:13.833 17:08:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:13.833 17:08:02 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:13.833 17:08:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:13.833 17:08:02 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:13.833 17:08:02 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:13.833 17:08:02 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:13.833 [2024-11-05 17:08:02.675124] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:29:13.833 17:08:02 -- common/autotest_common.sh@653 -- # es=22 00:29:13.833 17:08:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:13.833 17:08:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:13.833 
17:08:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:13.833 00:29:13.833 real 0m0.098s 00:29:13.833 user 0m0.055s 00:29:13.833 sys 0m0.044s 00:29:13.833 17:08:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:13.833 17:08:02 -- common/autotest_common.sh@10 -- # set +x 00:29:13.833 ************************************ 00:29:13.833 END TEST dd_invalid_oflag 00:29:13.833 ************************************ 00:29:14.091 17:08:02 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:29:14.091 17:08:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:14.091 17:08:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:14.091 17:08:02 -- common/autotest_common.sh@10 -- # set +x 00:29:14.091 ************************************ 00:29:14.091 START TEST dd_invalid_iflag 00:29:14.091 ************************************ 00:29:14.091 17:08:02 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:29:14.091 17:08:02 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:14.091 17:08:02 -- common/autotest_common.sh@650 -- # local es=0 00:29:14.092 17:08:02 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:14.092 17:08:02 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:14.092 17:08:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:14.092 17:08:02 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:14.092 17:08:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:14.092 17:08:02 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:14.092 17:08:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:14.092 17:08:02 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:14.092 17:08:02 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:14.092 17:08:02 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:14.092 [2024-11-05 17:08:02.838153] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:29:14.092 17:08:02 -- common/autotest_common.sh@653 -- # es=22 00:29:14.092 17:08:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:14.092 17:08:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:14.092 17:08:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:14.092 00:29:14.092 real 0m0.114s 00:29:14.092 user 0m0.038s 00:29:14.092 sys 0m0.077s 00:29:14.092 17:08:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:14.092 ************************************ 00:29:14.092 17:08:02 -- common/autotest_common.sh@10 -- # set +x 00:29:14.092 END TEST dd_invalid_iflag 00:29:14.092 ************************************ 00:29:14.092 17:08:02 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:29:14.092 17:08:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:14.092 17:08:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:14.092 17:08:02 -- common/autotest_common.sh@10 -- # set +x 00:29:14.092 ************************************ 00:29:14.092 START TEST dd_unknown_flag 00:29:14.092 ************************************ 00:29:14.092 17:08:02 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:29:14.092 17:08:02 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:14.092 17:08:02 -- common/autotest_common.sh@650 -- # local es=0 00:29:14.092 17:08:02 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:14.092 17:08:02 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:14.092 17:08:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:14.092 17:08:02 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:14.092 17:08:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:14.092 17:08:02 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:14.092 17:08:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:14.092 17:08:02 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:14.092 17:08:02 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:14.092 17:08:02 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:14.350 [2024-11-05 17:08:03.003822] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:14.350 [2024-11-05 17:08:03.004050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136456 ] 00:29:14.350 [2024-11-05 17:08:03.172058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.609 [2024-11-05 17:08:03.412013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.867 [2024-11-05 17:08:03.661590] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:29:14.867 [2024-11-05 17:08:03.661701] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:29:14.867 [2024-11-05 17:08:03.661730] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:29:14.867 [2024-11-05 17:08:03.661782] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:15.433 [2024-11-05 17:08:04.241462] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:15.691 17:08:04 -- common/autotest_common.sh@653 -- # es=236 00:29:15.691 17:08:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:15.691 17:08:04 -- common/autotest_common.sh@662 -- # es=108 00:29:15.691 17:08:04 -- common/autotest_common.sh@663 -- # case "$es" in 00:29:15.691 17:08:04 -- common/autotest_common.sh@670 -- # es=1 00:29:15.691 17:08:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:15.691 00:29:15.691 real 0m1.636s 00:29:15.691 user 0m1.296s 00:29:15.691 sys 0m0.241s 00:29:15.691 17:08:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:15.691 17:08:04 -- common/autotest_common.sh@10 -- # set +x 00:29:15.691 ************************************ 00:29:15.691 END 
TEST dd_unknown_flag 00:29:15.691 ************************************ 00:29:15.949 17:08:04 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:29:15.949 17:08:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:15.949 17:08:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:15.949 17:08:04 -- common/autotest_common.sh@10 -- # set +x 00:29:15.949 ************************************ 00:29:15.949 START TEST dd_invalid_json 00:29:15.949 ************************************ 00:29:15.949 17:08:04 -- common/autotest_common.sh@1114 -- # invalid_json 00:29:15.949 17:08:04 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:29:15.949 17:08:04 -- common/autotest_common.sh@650 -- # local es=0 00:29:15.949 17:08:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:29:15.949 17:08:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:15.949 17:08:04 -- dd/negative_dd.sh@95 -- # : 00:29:15.949 17:08:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:15.949 17:08:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:15.949 17:08:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:15.949 17:08:04 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:15.949 17:08:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:15.949 17:08:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:15.949 17:08:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:15.949 17:08:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:29:15.949 [2024-11-05 17:08:04.698141] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:29:15.949 [2024-11-05 17:08:04.698333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136503 ] 00:29:16.207 [2024-11-05 17:08:04.871144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.207 [2024-11-05 17:08:05.085986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.207 [2024-11-05 17:08:05.086178] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:29:16.207 [2024-11-05 17:08:05.086231] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:16.207 [2024-11-05 17:08:05.086299] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:16.772 17:08:05 -- common/autotest_common.sh@653 -- # es=234 00:29:16.772 17:08:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:16.772 17:08:05 -- common/autotest_common.sh@662 -- # es=106 00:29:16.772 17:08:05 -- common/autotest_common.sh@663 -- # case "$es" in 00:29:16.772 17:08:05 -- common/autotest_common.sh@670 -- # es=1 00:29:16.772 17:08:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:16.772 00:29:16.772 real 0m0.766s 00:29:16.772 user 0m0.539s 00:29:16.772 sys 0m0.129s 00:29:16.772 17:08:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:16.772 17:08:05 -- common/autotest_common.sh@10 -- # set +x 00:29:16.772 ************************************ 00:29:16.772 END TEST dd_invalid_json 00:29:16.772 ************************************ 00:29:16.772 ************************************ 00:29:16.772 END TEST spdk_dd_negative 00:29:16.772 ************************************ 00:29:16.772 00:29:16.772 real 0m6.165s 00:29:16.772 user 0m4.156s 00:29:16.772 sys 0m1.680s 00:29:16.772 17:08:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:16.772 17:08:05 -- common/autotest_common.sh@10 -- # set +x 00:29:16.772 00:29:16.772 real 2m19.057s 00:29:16.772 user 1m47.523s 00:29:16.772 sys 0m21.423s 00:29:16.772 17:08:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:16.772 17:08:05 -- common/autotest_common.sh@10 -- # set +x 00:29:16.772 ************************************ 00:29:16.773 END TEST spdk_dd 00:29:16.773 ************************************ 00:29:16.773 17:08:05 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:29:16.773 17:08:05 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:29:16.773 17:08:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:16.773 17:08:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:16.773 17:08:05 -- common/autotest_common.sh@10 -- # set +x 00:29:16.773 ************************************ 00:29:16.773 START TEST blockdev_nvme 00:29:16.773 ************************************ 00:29:16.773 17:08:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:29:16.773 * Looking for test storage... 
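Note on the es= bookkeeping visible throughout the failures above: exit statuses above 128 follow the shell's signal convention, so the harness first strips that bias (244 -> 116, 234 -> 106) and then collapses the recognized codes to a plain 1 before the final non-zero assert. Schematically (the two case values are the ones seen in this run; the real mapping may cover more):

  es=$?                        # raw status from the failed spdk_dd
  if (( es > 128 )); then
      es=$(( es - 128 ))       # strip the signal bias: 244 -> 116, 234 -> 106
  fi
  case "$es" in
      106|116) es=1 ;;         # collapse to a uniform failure code
  esac
  (( !es == 0 ))               # final assert: true only when es != 0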
00:29:16.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:16.773 17:08:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:16.773 17:08:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:16.773 17:08:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:17.031 17:08:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:17.031 17:08:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:17.031 17:08:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:17.031 17:08:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:17.031 17:08:05 -- scripts/common.sh@335 -- # IFS=.-: 00:29:17.031 17:08:05 -- scripts/common.sh@335 -- # read -ra ver1 00:29:17.031 17:08:05 -- scripts/common.sh@336 -- # IFS=.-: 00:29:17.031 17:08:05 -- scripts/common.sh@336 -- # read -ra ver2 00:29:17.031 17:08:05 -- scripts/common.sh@337 -- # local 'op=<' 00:29:17.031 17:08:05 -- scripts/common.sh@339 -- # ver1_l=2 00:29:17.031 17:08:05 -- scripts/common.sh@340 -- # ver2_l=1 00:29:17.031 17:08:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:17.031 17:08:05 -- scripts/common.sh@343 -- # case "$op" in 00:29:17.031 17:08:05 -- scripts/common.sh@344 -- # : 1 00:29:17.031 17:08:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:17.031 17:08:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:17.031 17:08:05 -- scripts/common.sh@364 -- # decimal 1 00:29:17.031 17:08:05 -- scripts/common.sh@352 -- # local d=1 00:29:17.031 17:08:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:17.031 17:08:05 -- scripts/common.sh@354 -- # echo 1 00:29:17.031 17:08:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:17.031 17:08:05 -- scripts/common.sh@365 -- # decimal 2 00:29:17.031 17:08:05 -- scripts/common.sh@352 -- # local d=2 00:29:17.031 17:08:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:17.031 17:08:05 -- scripts/common.sh@354 -- # echo 2 00:29:17.031 17:08:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:17.031 17:08:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:17.031 17:08:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:17.031 17:08:05 -- scripts/common.sh@367 -- # return 0 00:29:17.031 17:08:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:17.031 17:08:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:17.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.031 --rc genhtml_branch_coverage=1 00:29:17.031 --rc genhtml_function_coverage=1 00:29:17.031 --rc genhtml_legend=1 00:29:17.031 --rc geninfo_all_blocks=1 00:29:17.031 --rc geninfo_unexecuted_blocks=1 00:29:17.031 00:29:17.031 ' 00:29:17.031 17:08:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:17.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.031 --rc genhtml_branch_coverage=1 00:29:17.031 --rc genhtml_function_coverage=1 00:29:17.031 --rc genhtml_legend=1 00:29:17.031 --rc geninfo_all_blocks=1 00:29:17.031 --rc geninfo_unexecuted_blocks=1 00:29:17.031 00:29:17.031 ' 00:29:17.031 17:08:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:17.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.031 --rc genhtml_branch_coverage=1 00:29:17.031 --rc genhtml_function_coverage=1 00:29:17.031 --rc genhtml_legend=1 00:29:17.031 --rc geninfo_all_blocks=1 00:29:17.031 --rc geninfo_unexecuted_blocks=1 00:29:17.031 00:29:17.031 ' 00:29:17.031 17:08:05 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:17.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.031 --rc genhtml_branch_coverage=1 00:29:17.031 --rc genhtml_function_coverage=1 00:29:17.031 --rc genhtml_legend=1 00:29:17.031 --rc geninfo_all_blocks=1 00:29:17.031 --rc geninfo_unexecuted_blocks=1 00:29:17.031 00:29:17.031 ' 00:29:17.031 17:08:05 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:17.031 17:08:05 -- bdev/nbd_common.sh@6 -- # set -e 00:29:17.031 17:08:05 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:17.031 17:08:05 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:17.032 17:08:05 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:17.032 17:08:05 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:17.032 17:08:05 -- bdev/blockdev.sh@18 -- # : 00:29:17.032 17:08:05 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:29:17.032 17:08:05 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:29:17.032 17:08:05 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:29:17.032 17:08:05 -- bdev/blockdev.sh@672 -- # uname -s 00:29:17.032 17:08:05 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:29:17.032 17:08:05 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:29:17.032 17:08:05 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:29:17.032 17:08:05 -- bdev/blockdev.sh@681 -- # crypto_device= 00:29:17.032 17:08:05 -- bdev/blockdev.sh@682 -- # dek= 00:29:17.032 17:08:05 -- bdev/blockdev.sh@683 -- # env_ctx= 00:29:17.032 17:08:05 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:29:17.032 17:08:05 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:29:17.032 17:08:05 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:29:17.032 17:08:05 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:29:17.032 17:08:05 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:29:17.032 17:08:05 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=136607 00:29:17.032 17:08:05 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:17.032 17:08:05 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:17.032 17:08:05 -- bdev/blockdev.sh@47 -- # waitforlisten 136607 00:29:17.032 17:08:05 -- common/autotest_common.sh@829 -- # '[' -z 136607 ']' 00:29:17.032 17:08:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.032 17:08:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:17.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.032 17:08:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.032 17:08:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:17.032 17:08:05 -- common/autotest_common.sh@10 -- # set +x 00:29:17.032 [2024-11-05 17:08:05.801361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
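Note: at this point the harness is inside waitforlisten, polling until the freshly forked spdk_tgt (pid 136607) answers on /var/tmp/spdk.sock. A rough bash equivalent of that wait, simplified from the traced max_retries=100 setup (the real helper also probes the RPC layer, not just the socket):

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do              # max_retries=100, as traced
          kill -0 "$pid" 2> /dev/null || return 1  # target died early
          [[ -S $rpc_addr ]] && return 0           # socket is up
          sleep 0.1
      done
      return 1
  }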
00:29:17.032 [2024-11-05 17:08:05.801568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136607 ] 00:29:17.293 [2024-11-05 17:08:05.969141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.293 [2024-11-05 17:08:06.127806] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:17.293 [2024-11-05 17:08:06.128069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.668 17:08:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:18.668 17:08:07 -- common/autotest_common.sh@862 -- # return 0 00:29:18.668 17:08:07 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:29:18.668 17:08:07 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:29:18.668 17:08:07 -- bdev/blockdev.sh@79 -- # local json 00:29:18.668 17:08:07 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:29:18.668 17:08:07 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:18.668 17:08:07 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:29:18.669 17:08:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.669 17:08:07 -- common/autotest_common.sh@10 -- # set +x 00:29:18.927 17:08:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.927 17:08:07 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:29:18.927 17:08:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.927 17:08:07 -- common/autotest_common.sh@10 -- # set +x 00:29:18.927 17:08:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.927 17:08:07 -- bdev/blockdev.sh@738 -- # cat 00:29:18.927 17:08:07 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:29:18.927 17:08:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.927 17:08:07 -- common/autotest_common.sh@10 -- # set +x 00:29:18.927 17:08:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.927 17:08:07 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:29:18.927 17:08:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.927 17:08:07 -- common/autotest_common.sh@10 -- # set +x 00:29:18.927 17:08:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.927 17:08:07 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:18.927 17:08:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.927 17:08:07 -- common/autotest_common.sh@10 -- # set +x 00:29:18.927 17:08:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.927 17:08:07 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:29:18.927 17:08:07 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:29:18.927 17:08:07 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:29:18.927 17:08:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.927 17:08:07 -- common/autotest_common.sh@10 -- # set +x 00:29:18.927 17:08:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.927 17:08:07 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:29:18.928 17:08:07 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "e39019f9-93d6-42ff-a3ea-c5ade182e8da"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e39019f9-93d6-42ff-a3ea-c5ade182e8da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:29:18.928 17:08:07 -- bdev/blockdev.sh@747 -- # jq -r .name 00:29:18.928 17:08:07 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:29:18.928 17:08:07 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:29:18.928 17:08:07 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:29:18.928 17:08:07 -- bdev/blockdev.sh@752 -- # killprocess 136607 00:29:18.928 17:08:07 -- common/autotest_common.sh@936 -- # '[' -z 136607 ']' 00:29:18.928 17:08:07 -- common/autotest_common.sh@940 -- # kill -0 136607 00:29:18.928 17:08:07 -- common/autotest_common.sh@941 -- # uname 00:29:18.928 17:08:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:18.928 17:08:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136607 00:29:18.928 17:08:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:18.928 17:08:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:18.928 killing process with pid 136607 00:29:18.928 17:08:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 136607' 00:29:18.928 17:08:07 -- common/autotest_common.sh@955 -- # kill 136607 00:29:18.928 17:08:07 -- common/autotest_common.sh@960 -- # wait 136607 00:29:20.833 17:08:09 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:20.833 17:08:09 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:20.833 17:08:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:29:20.833 17:08:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:20.833 17:08:09 -- common/autotest_common.sh@10 -- # set +x 00:29:20.833 ************************************ 00:29:20.833 START TEST bdev_hello_world 00:29:20.833 ************************************ 00:29:20.834 17:08:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:20.834 [2024-11-05 17:08:09.586057] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:29:20.834 [2024-11-05 17:08:09.586238] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136699 ] 00:29:21.099 [2024-11-05 17:08:09.755294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.099 [2024-11-05 17:08:09.918754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.664 [2024-11-05 17:08:10.300432] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:21.664 [2024-11-05 17:08:10.300505] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:29:21.665 [2024-11-05 17:08:10.300551] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:21.665 [2024-11-05 17:08:10.303002] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:21.665 [2024-11-05 17:08:10.303614] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:21.665 [2024-11-05 17:08:10.303662] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:21.665 [2024-11-05 17:08:10.303951] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:29:21.665 00:29:21.665 [2024-11-05 17:08:10.303999] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:22.600 00:29:22.600 real 0m1.612s 00:29:22.600 user 0m1.272s 00:29:22.600 sys 0m0.240s 00:29:22.600 17:08:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:22.600 17:08:11 -- common/autotest_common.sh@10 -- # set +x 00:29:22.600 ************************************ 00:29:22.600 END TEST bdev_hello_world 00:29:22.600 ************************************ 00:29:22.600 17:08:11 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:29:22.600 17:08:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:22.600 17:08:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:22.600 17:08:11 -- common/autotest_common.sh@10 -- # set +x 00:29:22.600 ************************************ 00:29:22.600 START TEST bdev_bounds 00:29:22.600 ************************************ 00:29:22.600 17:08:11 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:29:22.600 17:08:11 -- bdev/blockdev.sh@288 -- # bdevio_pid=136744 00:29:22.600 17:08:11 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:22.600 Process bdevio pid: 136744 00:29:22.600 17:08:11 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 136744' 00:29:22.601 17:08:11 -- bdev/blockdev.sh@291 -- # waitforlisten 136744 00:29:22.601 17:08:11 -- common/autotest_common.sh@829 -- # '[' -z 136744 ']' 00:29:22.601 17:08:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.601 17:08:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:22.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.601 17:08:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
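Note: bdev_bounds has just forked bdevio (pid 136744) with -w, so the binary brings up the bdev layer over bdev.json and then blocks; the companion tests.py triggers the actual CUnit I/O suite over the same socket, which is why the test output only appears after perform_tests below. The shape of that handshake, schematically (waitforlisten and killprocess are the sourced harness helpers):

  bdevio=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
  "$bdevio" -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
  bdevio_pid=$!
  waitforlisten "$bdevio_pid"
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
  killprocess "$bdevio_pid"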
00:29:22.601 17:08:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:22.601 17:08:11 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:22.601 17:08:11 -- common/autotest_common.sh@10 -- # set +x 00:29:22.601 [2024-11-05 17:08:11.275668] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:22.601 [2024-11-05 17:08:11.276149] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136744 ] 00:29:22.601 [2024-11-05 17:08:11.456407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:22.858 [2024-11-05 17:08:11.617526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.858 [2024-11-05 17:08:11.617668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.858 [2024-11-05 17:08:11.617665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:23.425 17:08:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:23.425 17:08:12 -- common/autotest_common.sh@862 -- # return 0 00:29:23.425 17:08:12 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:23.425 I/O targets: 00:29:23.425 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:29:23.425 00:29:23.425 00:29:23.425 CUnit - A unit testing framework for C - Version 2.1-3 00:29:23.425 http://cunit.sourceforge.net/ 00:29:23.425 00:29:23.425 00:29:23.425 Suite: bdevio tests on: Nvme0n1 00:29:23.425 Test: blockdev write read block ...passed 00:29:23.425 Test: blockdev write zeroes read block ...passed 00:29:23.425 Test: blockdev write zeroes read no split ...passed 00:29:23.425 Test: blockdev write zeroes read split ...passed 00:29:23.425 Test: blockdev write zeroes read split partial ...passed 00:29:23.425 Test: blockdev reset ...[2024-11-05 17:08:12.319016] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:23.425 [2024-11-05 17:08:12.322740] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:23.425 passed 00:29:23.425 Test: blockdev write read 8 blocks ...passed 00:29:23.425 Test: blockdev write read size > 128k ...passed 00:29:23.425 Test: blockdev write read invalid size ...passed 00:29:23.683 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:23.683 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:23.683 Test: blockdev write read max offset ...passed 00:29:23.683 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:23.683 Test: blockdev writev readv 8 blocks ...passed 00:29:23.683 Test: blockdev writev readv 30 x 1block ...passed 00:29:23.683 Test: blockdev writev readv block ...passed 00:29:23.683 Test: blockdev writev readv size > 128k ...passed 00:29:23.683 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:23.683 Test: blockdev comparev and writev ...[2024-11-05 17:08:12.333095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x420d000 len:0x1000 00:29:23.683 [2024-11-05 17:08:12.333213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:23.683 passed 00:29:23.683 Test: blockdev nvme passthru rw ...passed 00:29:23.683 Test: blockdev nvme passthru vendor specific ...[2024-11-05 17:08:12.334415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:23.683 passed 00:29:23.683 Test: blockdev nvme admin passthru ...[2024-11-05 17:08:12.334471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:23.683 passed 00:29:23.683 Test: blockdev copy ...passed 00:29:23.683 00:29:23.683 Run Summary: Type Total Ran Passed Failed Inactive 00:29:23.683 suites 1 1 n/a 0 0 00:29:23.683 tests 23 23 23 0 0 00:29:23.683 asserts 152 152 152 0 n/a 00:29:23.683 00:29:23.683 Elapsed time = 0.181 seconds 00:29:23.683 0 00:29:23.683 17:08:12 -- bdev/blockdev.sh@293 -- # killprocess 136744 00:29:23.683 17:08:12 -- common/autotest_common.sh@936 -- # '[' -z 136744 ']' 00:29:23.683 17:08:12 -- common/autotest_common.sh@940 -- # kill -0 136744 00:29:23.683 17:08:12 -- common/autotest_common.sh@941 -- # uname 00:29:23.683 17:08:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:23.683 17:08:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136744 00:29:23.683 17:08:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:23.683 killing process with pid 136744 00:29:23.684 17:08:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:23.684 17:08:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 136744' 00:29:23.684 17:08:12 -- common/autotest_common.sh@955 -- # kill 136744 00:29:23.684 17:08:12 -- common/autotest_common.sh@960 -- # wait 136744 00:29:24.620 17:08:13 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:29:24.620 00:29:24.620 real 0m2.094s 00:29:24.620 user 0m4.882s 00:29:24.620 sys 0m0.370s 00:29:24.620 17:08:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:24.620 ************************************ 00:29:24.620 END TEST bdev_bounds 00:29:24.620 ************************************ 00:29:24.620 17:08:13 -- common/autotest_common.sh@10 -- # set +x 00:29:24.620 17:08:13 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
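Note: nbd_function_test, traced below, exports the Nvme0n1 bdev as a kernel block device via the NBD driver and then proves data integrity through it with plain coreutils. Stripped to its core, the verify step seen later in this trace amounts to (an illustrative condensation of the nbd_common.sh helpers, not their verbatim source):

  nbd_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

  nbd_rpc nbd_start_disk Nvme0n1 /dev/nbd0                # bdev -> /dev/nbd0
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256     # 1 MiB of random data
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M nbdrandtest /dev/nbd0                      # byte-for-byte check
  nbd_rpc nbd_stop_disk /dev/nbd0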
00:29:24.620 17:08:13 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:29:24.620 17:08:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:24.620 17:08:13 -- common/autotest_common.sh@10 -- # set +x 00:29:24.620 ************************************ 00:29:24.620 START TEST bdev_nbd 00:29:24.620 ************************************ 00:29:24.620 17:08:13 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:29:24.620 17:08:13 -- bdev/blockdev.sh@298 -- # uname -s 00:29:24.620 17:08:13 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:29:24.620 17:08:13 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:24.620 17:08:13 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:24.620 17:08:13 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1') 00:29:24.620 17:08:13 -- bdev/blockdev.sh@302 -- # local bdev_all 00:29:24.620 17:08:13 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:29:24.620 17:08:13 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:29:24.620 17:08:13 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:29:24.620 17:08:13 -- bdev/blockdev.sh@309 -- # local nbd_all 00:29:24.620 17:08:13 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:29:24.620 17:08:13 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:29:24.620 17:08:13 -- bdev/blockdev.sh@312 -- # local nbd_list 00:29:24.620 17:08:13 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1') 00:29:24.620 17:08:13 -- bdev/blockdev.sh@313 -- # local bdev_list 00:29:24.620 17:08:13 -- bdev/blockdev.sh@316 -- # nbd_pid=136807 00:29:24.620 17:08:13 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:24.620 17:08:13 -- bdev/blockdev.sh@318 -- # waitforlisten 136807 /var/tmp/spdk-nbd.sock 00:29:24.620 17:08:13 -- common/autotest_common.sh@829 -- # '[' -z 136807 ']' 00:29:24.620 17:08:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:24.620 17:08:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:24.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:24.620 17:08:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:24.620 17:08:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:24.620 17:08:13 -- common/autotest_common.sh@10 -- # set +x 00:29:24.620 17:08:13 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:24.620 [2024-11-05 17:08:13.425736] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:29:24.620 [2024-11-05 17:08:13.426210] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.878 [2024-11-05 17:08:13.597323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.878 [2024-11-05 17:08:13.755402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.445 17:08:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:25.445 17:08:14 -- common/autotest_common.sh@862 -- # return 0 00:29:25.445 17:08:14 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:29:25.445 17:08:14 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:25.445 17:08:14 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:29:25.445 17:08:14 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:25.445 17:08:14 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:29:25.445 17:08:14 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:25.445 17:08:14 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:29:25.445 17:08:14 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:25.445 17:08:14 -- bdev/nbd_common.sh@24 -- # local i 00:29:25.445 17:08:14 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:25.445 17:08:14 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:25.445 17:08:14 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:25.445 17:08:14 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:29:25.703 17:08:14 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:25.703 17:08:14 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:25.703 17:08:14 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:25.703 17:08:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:25.703 17:08:14 -- common/autotest_common.sh@867 -- # local i 00:29:25.703 17:08:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:25.703 17:08:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:25.703 17:08:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:25.703 17:08:14 -- common/autotest_common.sh@871 -- # break 00:29:25.703 17:08:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:25.703 17:08:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:25.703 17:08:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:25.703 1+0 records in 00:29:25.703 1+0 records out 00:29:25.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437187 s, 9.4 MB/s 00:29:25.703 17:08:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:25.703 17:08:14 -- common/autotest_common.sh@884 -- # size=4096 00:29:25.703 17:08:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:25.703 17:08:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:25.703 17:08:14 -- common/autotest_common.sh@887 -- # return 0 00:29:25.703 17:08:14 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:25.703 17:08:14 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:25.703 17:08:14 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:25.961 17:08:14 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:25.961 { 00:29:25.961 "nbd_device": "/dev/nbd0", 00:29:25.961 "bdev_name": "Nvme0n1" 00:29:25.961 } 00:29:25.961 ]' 00:29:25.961 17:08:14 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:25.961 17:08:14 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:25.961 17:08:14 -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:25.961 { 00:29:25.961 "nbd_device": "/dev/nbd0", 00:29:25.961 "bdev_name": "Nvme0n1" 00:29:25.961 } 00:29:25.961 ]' 00:29:25.961 17:08:14 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:25.961 17:08:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:25.961 17:08:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:25.961 17:08:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:25.961 17:08:14 -- bdev/nbd_common.sh@51 -- # local i 00:29:25.961 17:08:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:25.961 17:08:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:26.219 17:08:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:26.220 17:08:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:26.220 17:08:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:26.220 17:08:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:26.220 17:08:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:26.220 17:08:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:26.220 17:08:15 -- bdev/nbd_common.sh@41 -- # break 00:29:26.220 17:08:15 -- bdev/nbd_common.sh@45 -- # return 0 00:29:26.220 17:08:15 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:26.220 17:08:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:26.220 17:08:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@65 -- # true 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@65 -- # count=0 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@122 -- # count=0 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@127 -- # return 0 00:29:26.478 17:08:15 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@10 
-- # bdev_list=('Nvme0n1') 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@12 -- # local i 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:26.478 17:08:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:29:26.737 /dev/nbd0 00:29:26.737 17:08:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:26.737 17:08:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:26.737 17:08:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:26.737 17:08:15 -- common/autotest_common.sh@867 -- # local i 00:29:26.737 17:08:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:26.737 17:08:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:26.737 17:08:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:26.737 17:08:15 -- common/autotest_common.sh@871 -- # break 00:29:26.737 17:08:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:26.737 17:08:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:26.737 17:08:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:26.737 1+0 records in 00:29:26.737 1+0 records out 00:29:26.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470431 s, 8.7 MB/s 00:29:26.737 17:08:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:26.737 17:08:15 -- common/autotest_common.sh@884 -- # size=4096 00:29:26.737 17:08:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:26.737 17:08:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:26.737 17:08:15 -- common/autotest_common.sh@887 -- # return 0 00:29:26.737 17:08:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:26.737 17:08:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:26.737 17:08:15 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:26.737 17:08:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:26.737 17:08:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:26.996 17:08:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:26.996 { 00:29:26.996 "nbd_device": "/dev/nbd0", 00:29:26.996 "bdev_name": "Nvme0n1" 00:29:26.996 } 00:29:26.996 ]' 00:29:26.996 17:08:15 -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:26.996 { 00:29:26.996 "nbd_device": "/dev/nbd0", 00:29:26.996 "bdev_name": "Nvme0n1" 00:29:26.996 } 00:29:26.996 ]' 00:29:26.996 17:08:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:26.996 17:08:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:29:27.258 17:08:15 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:29:27.258 17:08:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:27.258 17:08:15 -- bdev/nbd_common.sh@65 -- # count=1 00:29:27.258 17:08:15 -- bdev/nbd_common.sh@66 -- # echo 1 00:29:27.258 17:08:15 -- bdev/nbd_common.sh@95 -- # count=1 00:29:27.258 17:08:15 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:29:27.258 17:08:15 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:29:27.258 17:08:15 -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:29:27.258 17:08:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:27.258 17:08:15 -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:27.258 17:08:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:27.258 17:08:15 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:27.258 17:08:15 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:27.258 256+0 records in 00:29:27.258 256+0 records out 00:29:27.258 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108277 s, 96.8 MB/s 00:29:27.258 17:08:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:27.258 17:08:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:27.258 256+0 records in 00:29:27.258 256+0 records out 00:29:27.258 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.094437 s, 11.1 MB/s 00:29:27.258 17:08:16 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:29:27.258 17:08:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:29:27.258 17:08:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:27.258 17:08:16 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:27.258 17:08:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:27.258 17:08:16 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:27.258 17:08:16 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:27.258 17:08:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:27.258 17:08:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:27.258 17:08:16 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:27.258 17:08:16 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:27.258 17:08:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:27.258 17:08:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:27.258 17:08:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:27.258 17:08:16 -- bdev/nbd_common.sh@51 -- # local i 00:29:27.258 17:08:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:27.258 17:08:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:27.538 17:08:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:27.538 17:08:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:27.538 17:08:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:27.538 17:08:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:27.538 17:08:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:27.538 17:08:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:27.538 17:08:16 -- bdev/nbd_common.sh@41 -- # break 00:29:27.538 17:08:16 -- bdev/nbd_common.sh@45 -- # return 0 00:29:27.538 17:08:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:27.538 17:08:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:27.538 17:08:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:27.808 17:08:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:27.808 17:08:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:27.808 
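Editor's note: the nbd_dd_data_verify calls traced above reduce to a pair of dd transfers plus a byte compare. A minimal sketch of that round-trip, with block size, count, and flags taken from the trace (the canonical helper in nbd_common.sh also iterates over a whole nbd_list):

    tmp_file=/tmp/nbdrandtest
    # stage 1 MiB of random data, then push it through the NBD device with O_DIRECT
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    dd if="$tmp_file" of=/dev/nbd0 bs=4096 count=256 oflag=direct
    # verify pass: byte-compare the first 1 MiB of the device against the source file
    cmp -b -n 1M "$tmp_file" /dev/nbd0
    rm "$tmp_file"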
17:08:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:27.808 17:08:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:27.808 17:08:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:27.808 17:08:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:27.808 17:08:16 -- bdev/nbd_common.sh@65 -- # true 00:29:27.808 17:08:16 -- bdev/nbd_common.sh@65 -- # count=0 00:29:27.808 17:08:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:27.808 17:08:16 -- bdev/nbd_common.sh@104 -- # count=0 00:29:27.808 17:08:16 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:27.808 17:08:16 -- bdev/nbd_common.sh@109 -- # return 0 00:29:27.808 17:08:16 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:27.808 17:08:16 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:27.808 17:08:16 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:29:27.808 17:08:16 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:29:27.808 17:08:16 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:29:27.808 17:08:16 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:28.066 malloc_lvol_verify 00:29:28.066 17:08:16 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:28.325 4bb0d2cf-b42b-4558-a54f-0fc7fccbc47c 00:29:28.325 17:08:17 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:28.584 54844484-5d4b-4857-9a14-89dc6a74a28f 00:29:28.584 17:08:17 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:28.843 /dev/nbd0 00:29:28.843 17:08:17 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:29:28.843 mke2fs 1.46.5 (30-Dec-2021) 00:29:28.843 00:29:28.843 Filesystem too small for a journal 00:29:28.843 Discarding device blocks: 0/1024 done 00:29:28.843 Creating filesystem with 1024 4k blocks and 1024 inodes 00:29:28.843 00:29:28.843 Allocating group tables: 0/1 done 00:29:28.843 Writing inode tables: 0/1 done 00:29:28.843 Writing superblocks and filesystem accounting information: 0/1 done 00:29:28.843 00:29:28.843 17:08:17 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:29:28.843 17:08:17 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:28.843 17:08:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:28.843 17:08:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:28.843 17:08:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:28.843 17:08:17 -- bdev/nbd_common.sh@51 -- # local i 00:29:28.843 17:08:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:28.843 17:08:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:29.102 17:08:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:29.102 17:08:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:29.102 17:08:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:29.102 17:08:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:29.102 17:08:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:29.102 17:08:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:29.102 17:08:17 -- bdev/nbd_common.sh@41 -- # break 00:29:29.102 17:08:17 -- 
bdev/nbd_common.sh@45 -- # return 0 00:29:29.102 17:08:17 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:29:29.102 17:08:17 -- bdev/nbd_common.sh@147 -- # return 0 00:29:29.102 17:08:17 -- bdev/blockdev.sh@324 -- # killprocess 136807 00:29:29.102 17:08:17 -- common/autotest_common.sh@936 -- # '[' -z 136807 ']' 00:29:29.102 17:08:17 -- common/autotest_common.sh@940 -- # kill -0 136807 00:29:29.102 17:08:17 -- common/autotest_common.sh@941 -- # uname 00:29:29.102 17:08:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:29.102 17:08:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136807 00:29:29.102 17:08:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:29.102 17:08:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:29.102 killing process with pid 136807 00:29:29.102 17:08:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 136807' 00:29:29.102 17:08:17 -- common/autotest_common.sh@955 -- # kill 136807 00:29:29.102 17:08:17 -- common/autotest_common.sh@960 -- # wait 136807 00:29:30.039 17:08:18 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:29:30.039 00:29:30.039 real 0m5.552s 00:29:30.039 user 0m8.088s 00:29:30.039 sys 0m1.105s 00:29:30.039 17:08:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:30.039 ************************************ 00:29:30.039 END TEST bdev_nbd 00:29:30.039 17:08:18 -- common/autotest_common.sh@10 -- # set +x 00:29:30.039 ************************************ 00:29:30.297 17:08:18 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:29:30.297 17:08:18 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:29:30.297 skipping fio tests on NVMe due to multi-ns failures. 00:29:30.297 17:08:18 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:29:30.297 17:08:18 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:30.297 17:08:18 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:30.297 17:08:18 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:29:30.297 17:08:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:30.297 17:08:18 -- common/autotest_common.sh@10 -- # set +x 00:29:30.297 ************************************ 00:29:30.297 START TEST bdev_verify 00:29:30.297 ************************************ 00:29:30.297 17:08:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:30.297 [2024-11-05 17:08:19.029517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:30.297 [2024-11-05 17:08:19.029923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136995 ] 00:29:30.556 [2024-11-05 17:08:19.208391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:30.556 [2024-11-05 17:08:19.437459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.556 [2024-11-05 17:08:19.437453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.122 Running I/O for 5 seconds... 
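Editor's note: the bdev_verify stage that has just started drives the bdevperf example app; the invocation below is copied from the trace, with the usual meanings of its flags noted in a comment (the -C flag is left unannotated here, as the trace does not explain it):

    # -q 128: queue depth per job; -o 4096: I/O size in bytes;
    # -w verify: write, read back, and check; -t 5: run time in seconds;
    # -m 0x3: core mask, i.e. the two reactors seen starting in the log
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3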
00:29:36.392 00:29:36.393 Latency(us) 00:29:36.393 [2024-11-05T17:08:25.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.393 [2024-11-05T17:08:25.270Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:36.393 Verification LBA range: start 0x0 length 0xa0000 00:29:36.393 Nvme0n1 : 5.01 13414.89 52.40 0.00 0.00 9502.87 539.93 13762.56 00:29:36.393 [2024-11-05T17:08:25.270Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:36.393 Verification LBA range: start 0xa0000 length 0xa0000 00:29:36.393 Nvme0n1 : 5.01 13539.50 52.89 0.00 0.00 9416.50 726.11 18469.24 00:29:36.393 [2024-11-05T17:08:25.270Z] =================================================================================================================== 00:29:36.393 [2024-11-05T17:08:25.270Z] Total : 26954.39 105.29 0.00 0.00 9459.48 539.93 18469.24 00:29:42.956 ************************************ 00:29:42.956 END TEST bdev_verify 00:29:42.956 ************************************ 00:29:42.956 00:29:42.956 real 0m12.892s 00:29:42.956 user 0m24.529s 00:29:42.956 sys 0m0.349s 00:29:42.956 17:08:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:42.956 17:08:31 -- common/autotest_common.sh@10 -- # set +x 00:29:43.215 17:08:31 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:43.215 17:08:31 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:29:43.215 17:08:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:43.215 17:08:31 -- common/autotest_common.sh@10 -- # set +x 00:29:43.215 ************************************ 00:29:43.215 START TEST bdev_verify_big_io 00:29:43.215 ************************************ 00:29:43.215 17:08:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:43.215 [2024-11-05 17:08:31.987965] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:43.215 [2024-11-05 17:08:31.988417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137142 ] 00:29:43.474 [2024-11-05 17:08:32.161095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:43.474 [2024-11-05 17:08:32.337135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.474 [2024-11-05 17:08:32.337146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.041 Running I/O for 5 seconds... 
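Editor's note: a quick way to sanity-check these result tables: the MiB/s column is IOPS times the I/O size, and the Total row is the sum of the two per-core jobs (13414.89 + 13539.50 = 26954.39). For the 4096-byte verify run above:

    # MiB/s = IOPS * io_size / 2^20
    echo '13414.89 * 4096 / 1048576' | bc -l   # ~52.40, matching the first Nvme0n1 row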
00:29:49.309 00:29:49.309 Latency(us) 00:29:49.309 [2024-11-05T17:08:38.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.309 [2024-11-05T17:08:38.186Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:49.309 Verification LBA range: start 0x0 length 0xa000 00:29:49.309 Nvme0n1 : 5.04 2198.06 137.38 0.00 0.00 57495.98 774.52 89605.59 00:29:49.309 [2024-11-05T17:08:38.186Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:49.309 Verification LBA range: start 0xa000 length 0xa000 00:29:49.309 Nvme0n1 : 5.03 2560.57 160.04 0.00 0.00 49387.57 677.70 70063.94 00:29:49.309 [2024-11-05T17:08:38.186Z] =================================================================================================================== 00:29:49.309 [2024-11-05T17:08:38.186Z] Total : 4758.63 297.41 0.00 0.00 53135.12 677.70 89605.59 00:29:50.245 ************************************ 00:29:50.245 END TEST bdev_verify_big_io 00:29:50.245 ************************************ 00:29:50.245 00:29:50.245 real 0m7.076s 00:29:50.245 user 0m13.042s 00:29:50.245 sys 0m0.261s 00:29:50.245 17:08:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:50.245 17:08:38 -- common/autotest_common.sh@10 -- # set +x 00:29:50.245 17:08:39 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:50.245 17:08:39 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:29:50.245 17:08:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:50.245 17:08:39 -- common/autotest_common.sh@10 -- # set +x 00:29:50.245 ************************************ 00:29:50.245 START TEST bdev_write_zeroes 00:29:50.245 ************************************ 00:29:50.245 17:08:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:50.245 [2024-11-05 17:08:39.115155] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:50.245 [2024-11-05 17:08:39.115928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137249 ] 00:29:50.503 [2024-11-05 17:08:39.288871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.762 [2024-11-05 17:08:39.449669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.019 Running I/O for 1 seconds... 
00:29:51.985 00:29:51.985 Latency(us) 00:29:51.985 [2024-11-05T17:08:40.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.985 [2024-11-05T17:08:40.862Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:51.985 Nvme0n1 : 1.00 69055.08 269.75 0.00 0.00 1849.00 543.65 12213.53 00:29:51.985 [2024-11-05T17:08:40.862Z] =================================================================================================================== 00:29:51.985 [2024-11-05T17:08:40.862Z] Total : 69055.08 269.75 0.00 0.00 1849.00 543.65 12213.53 00:29:52.920 00:29:52.920 real 0m2.721s 00:29:52.920 user 0m2.388s 00:29:52.920 sys 0m0.232s 00:29:52.920 17:08:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:52.920 ************************************ 00:29:52.920 END TEST bdev_write_zeroes 00:29:52.920 ************************************ 00:29:52.920 17:08:41 -- common/autotest_common.sh@10 -- # set +x 00:29:52.920 17:08:41 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:52.920 17:08:41 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:29:52.920 17:08:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:52.920 17:08:41 -- common/autotest_common.sh@10 -- # set +x 00:29:52.920 ************************************ 00:29:52.920 START TEST bdev_json_nonenclosed 00:29:52.920 ************************************ 00:29:52.920 17:08:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:53.194 [2024-11-05 17:08:41.861730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:53.194 [2024-11-05 17:08:41.862031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137307 ] 00:29:53.194 [2024-11-05 17:08:42.015312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.457 [2024-11-05 17:08:42.171771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.457 [2024-11-05 17:08:42.172298] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:29:53.457 [2024-11-05 17:08:42.172450] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:53.715 ************************************ 00:29:53.715 END TEST bdev_json_nonenclosed 00:29:53.715 ************************************ 00:29:53.715 00:29:53.715 real 0m0.672s 00:29:53.715 user 0m0.455s 00:29:53.715 sys 0m0.117s 00:29:53.715 17:08:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:53.715 17:08:42 -- common/autotest_common.sh@10 -- # set +x 00:29:53.715 17:08:42 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:53.715 17:08:42 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:29:53.715 17:08:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:53.715 17:08:42 -- common/autotest_common.sh@10 -- # set +x 00:29:53.715 ************************************ 00:29:53.715 START TEST bdev_json_nonarray 00:29:53.715 ************************************ 00:29:53.715 17:08:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:53.715 [2024-11-05 17:08:42.582524] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:53.715 [2024-11-05 17:08:42.582987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137345 ] 00:29:53.974 [2024-11-05 17:08:42.736132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.232 [2024-11-05 17:08:42.893473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.232 [2024-11-05 17:08:42.893990] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
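Editor's note: both JSON negative tests above feed bdevperf a deliberately malformed config and expect spdk_app_start to fail. The actual contents of nonenclosed.json and nonarray.json are not shown in this log; hypothetical minimal reproductions consistent with the two error messages would be:

    # "not enclosed in {}": a bare fragment instead of a top-level object
    echo '"subsystems": []' > /tmp/nonenclosed.json
    # "'subsystems' should be an array": an object where the array belongs
    echo '{ "subsystems": {} }' > /tmp/nonarray.json
    # either file should make the app exit non-zero, as logged above
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' \
        || echo "failed as expected"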
00:29:54.232 [2024-11-05 17:08:42.894153] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:54.490 ************************************ 00:29:54.490 END TEST bdev_json_nonarray 00:29:54.490 ************************************ 00:29:54.490 00:29:54.490 real 0m0.664s 00:29:54.490 user 0m0.459s 00:29:54.490 sys 0m0.104s 00:29:54.490 17:08:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:54.490 17:08:43 -- common/autotest_common.sh@10 -- # set +x 00:29:54.490 17:08:43 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:29:54.490 17:08:43 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:29:54.490 17:08:43 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:29:54.490 17:08:43 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:29:54.490 17:08:43 -- bdev/blockdev.sh@809 -- # cleanup 00:29:54.490 17:08:43 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:29:54.490 17:08:43 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:54.490 17:08:43 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:29:54.490 17:08:43 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:29:54.490 17:08:43 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:29:54.490 17:08:43 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:29:54.490 ************************************ 00:29:54.490 END TEST blockdev_nvme 00:29:54.490 ************************************ 00:29:54.490 00:29:54.490 real 0m37.721s 00:29:54.490 user 0m59.646s 00:29:54.490 sys 0m3.514s 00:29:54.490 17:08:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:54.490 17:08:43 -- common/autotest_common.sh@10 -- # set +x 00:29:54.490 17:08:43 -- spdk/autotest.sh@206 -- # uname -s 00:29:54.490 17:08:43 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:29:54.490 17:08:43 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:29:54.490 17:08:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:54.490 17:08:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:54.490 17:08:43 -- common/autotest_common.sh@10 -- # set +x 00:29:54.490 ************************************ 00:29:54.490 START TEST blockdev_nvme_gpt 00:29:54.490 ************************************ 00:29:54.490 17:08:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:29:54.490 * Looking for test storage... 
00:29:54.490 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:54.490 17:08:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:54.490 17:08:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:54.490 17:08:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:54.749 17:08:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:54.749 17:08:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:54.749 17:08:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:54.749 17:08:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:54.749 17:08:43 -- scripts/common.sh@335 -- # IFS=.-: 00:29:54.749 17:08:43 -- scripts/common.sh@335 -- # read -ra ver1 00:29:54.749 17:08:43 -- scripts/common.sh@336 -- # IFS=.-: 00:29:54.749 17:08:43 -- scripts/common.sh@336 -- # read -ra ver2 00:29:54.749 17:08:43 -- scripts/common.sh@337 -- # local 'op=<' 00:29:54.749 17:08:43 -- scripts/common.sh@339 -- # ver1_l=2 00:29:54.749 17:08:43 -- scripts/common.sh@340 -- # ver2_l=1 00:29:54.749 17:08:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:54.749 17:08:43 -- scripts/common.sh@343 -- # case "$op" in 00:29:54.749 17:08:43 -- scripts/common.sh@344 -- # : 1 00:29:54.749 17:08:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:54.749 17:08:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:54.749 17:08:43 -- scripts/common.sh@364 -- # decimal 1 00:29:54.749 17:08:43 -- scripts/common.sh@352 -- # local d=1 00:29:54.749 17:08:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:54.749 17:08:43 -- scripts/common.sh@354 -- # echo 1 00:29:54.749 17:08:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:54.749 17:08:43 -- scripts/common.sh@365 -- # decimal 2 00:29:54.749 17:08:43 -- scripts/common.sh@352 -- # local d=2 00:29:54.749 17:08:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:54.749 17:08:43 -- scripts/common.sh@354 -- # echo 2 00:29:54.749 17:08:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:54.749 17:08:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:54.749 17:08:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:54.749 17:08:43 -- scripts/common.sh@367 -- # return 0 00:29:54.749 17:08:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:54.749 17:08:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:54.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.749 --rc genhtml_branch_coverage=1 00:29:54.749 --rc genhtml_function_coverage=1 00:29:54.749 --rc genhtml_legend=1 00:29:54.749 --rc geninfo_all_blocks=1 00:29:54.749 --rc geninfo_unexecuted_blocks=1 00:29:54.749 00:29:54.749 ' 00:29:54.749 17:08:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:54.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.749 --rc genhtml_branch_coverage=1 00:29:54.749 --rc genhtml_function_coverage=1 00:29:54.749 --rc genhtml_legend=1 00:29:54.749 --rc geninfo_all_blocks=1 00:29:54.749 --rc geninfo_unexecuted_blocks=1 00:29:54.749 00:29:54.749 ' 00:29:54.749 17:08:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:54.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.749 --rc genhtml_branch_coverage=1 00:29:54.749 --rc genhtml_function_coverage=1 00:29:54.749 --rc genhtml_legend=1 00:29:54.749 --rc geninfo_all_blocks=1 00:29:54.749 --rc geninfo_unexecuted_blocks=1 00:29:54.749 00:29:54.749 ' 00:29:54.749 17:08:43 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:54.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.749 --rc genhtml_branch_coverage=1 00:29:54.749 --rc genhtml_function_coverage=1 00:29:54.749 --rc genhtml_legend=1 00:29:54.749 --rc geninfo_all_blocks=1 00:29:54.749 --rc geninfo_unexecuted_blocks=1 00:29:54.749 00:29:54.749 ' 00:29:54.749 17:08:43 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:54.749 17:08:43 -- bdev/nbd_common.sh@6 -- # set -e 00:29:54.749 17:08:43 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:54.749 17:08:43 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:54.749 17:08:43 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:54.749 17:08:43 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:54.749 17:08:43 -- bdev/blockdev.sh@18 -- # : 00:29:54.749 17:08:43 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:29:54.749 17:08:43 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:29:54.749 17:08:43 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:29:54.749 17:08:43 -- bdev/blockdev.sh@672 -- # uname -s 00:29:54.749 17:08:43 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:29:54.749 17:08:43 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:29:54.749 17:08:43 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:29:54.749 17:08:43 -- bdev/blockdev.sh@681 -- # crypto_device= 00:29:54.749 17:08:43 -- bdev/blockdev.sh@682 -- # dek= 00:29:54.749 17:08:43 -- bdev/blockdev.sh@683 -- # env_ctx= 00:29:54.749 17:08:43 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:29:54.749 17:08:43 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:29:54.749 17:08:43 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:29:54.749 17:08:43 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:29:54.749 17:08:43 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:29:54.749 17:08:43 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=137420 00:29:54.749 17:08:43 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:54.749 17:08:43 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:54.749 17:08:43 -- bdev/blockdev.sh@47 -- # waitforlisten 137420 00:29:54.749 17:08:43 -- common/autotest_common.sh@829 -- # '[' -z 137420 ']' 00:29:54.749 17:08:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.749 17:08:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:54.749 17:08:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.749 17:08:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:54.749 17:08:43 -- common/autotest_common.sh@10 -- # set +x 00:29:54.749 [2024-11-05 17:08:43.505648] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
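Editor's note: waitforlisten, used above to gate the test until spdk_tgt answers RPC, follows a simple poll loop. A schematic version (simplified; the real helper in autotest_common.sh adds a retry budget and richer error reporting):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        while kill -0 "$pid" 2>/dev/null; do
            # done as soon as the target answers an RPC on its UNIX socket
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1   # the process died before it started listening
    }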
00:29:54.749 [2024-11-05 17:08:43.505842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137420 ] 00:29:55.008 [2024-11-05 17:08:43.664546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.008 [2024-11-05 17:08:43.830951] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:55.008 [2024-11-05 17:08:43.831174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.384 17:08:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:56.384 17:08:45 -- common/autotest_common.sh@862 -- # return 0 00:29:56.384 17:08:45 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:29:56.384 17:08:45 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:29:56.384 17:08:45 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:56.643 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:56.643 Waiting for block devices as requested 00:29:56.643 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:29:56.643 17:08:45 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:29:56.643 17:08:45 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:29:56.643 17:08:45 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:29:56.643 17:08:45 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:29:56.643 17:08:45 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:29:56.643 17:08:45 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:29:56.643 17:08:45 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:29:56.643 17:08:45 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:56.643 17:08:45 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:29:56.643 17:08:45 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme0/nvme0n1') 00:29:56.643 17:08:45 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:29:56.643 17:08:45 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:29:56.643 17:08:45 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:29:56.643 17:08:45 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:29:56.643 17:08:45 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:29:56.643 17:08:45 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:29:56.643 17:08:45 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:29:56.643 BYT; 00:29:56.643 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:29:56.643 17:08:45 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:29:56.643 BYT; 00:29:56.643 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:29:56.643 17:08:45 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:29:56.643 17:08:45 -- bdev/blockdev.sh@114 -- # break 00:29:56.643 17:08:45 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:29:56.643 17:08:45 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:29:56.643 17:08:45 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:29:56.643 17:08:45 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart 
SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:29:57.209 17:08:45 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:29:57.209 17:08:45 -- scripts/common.sh@410 -- # local spdk_guid 00:29:57.209 17:08:45 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:29:57.209 17:08:45 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:57.209 17:08:45 -- scripts/common.sh@415 -- # IFS='()' 00:29:57.209 17:08:45 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:29:57.209 17:08:45 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:57.209 17:08:45 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:29:57.209 17:08:45 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:57.209 17:08:45 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:57.209 17:08:45 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:57.209 17:08:45 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:29:57.209 17:08:45 -- scripts/common.sh@422 -- # local spdk_guid 00:29:57.209 17:08:45 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:29:57.209 17:08:45 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:57.209 17:08:45 -- scripts/common.sh@427 -- # IFS='()' 00:29:57.209 17:08:45 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:29:57.209 17:08:45 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:57.209 17:08:45 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:29:57.209 17:08:45 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:57.209 17:08:45 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:57.209 17:08:45 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:57.209 17:08:45 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:29:58.143 The operation has completed successfully. 00:29:58.143 17:08:46 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:29:59.079 The operation has completed successfully. 
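Editor's note: the get_spdk_gpt/get_spdk_gpt_old helpers traced above pull SPDK's partition-type GUIDs out of the C header rather than hard-coding them. Condensed from the xtrace (the shipped version in scripts/common.sh is essentially this plus existence checks):

    get_spdk_gpt() {
        local gpt_h=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h spdk_guid
        # grab the text between the macro's parentheses
        # (0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b in this tree)
        IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$gpt_h")
        spdk_guid=${spdk_guid//0x/}   # -> 6527994e-2c5a-4eec-9613-8f5944074e8b
        echo "$spdk_guid"
    }
    # the type GUID is then stamped onto partition 1 alongside a fixed unique GUID:
    # sgdisk -t 1:$(get_spdk_gpt) -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1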
00:29:59.079 17:08:47 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:59.646 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:59.646 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:30:00.602 17:08:49 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:30:00.602 17:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.602 17:08:49 -- common/autotest_common.sh@10 -- # set +x 00:30:00.602 [] 00:30:00.602 17:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.602 17:08:49 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:30:00.602 17:08:49 -- bdev/blockdev.sh@79 -- # local json 00:30:00.602 17:08:49 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:30:00.602 17:08:49 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:00.602 17:08:49 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:30:00.602 17:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.602 17:08:49 -- common/autotest_common.sh@10 -- # set +x 00:30:00.602 17:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.602 17:08:49 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:30:00.602 17:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.602 17:08:49 -- common/autotest_common.sh@10 -- # set +x 00:30:00.602 17:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.602 17:08:49 -- bdev/blockdev.sh@738 -- # cat 00:30:00.602 17:08:49 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:30:00.602 17:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.602 17:08:49 -- common/autotest_common.sh@10 -- # set +x 00:30:00.602 17:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.602 17:08:49 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:30:00.602 17:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.602 17:08:49 -- common/autotest_common.sh@10 -- # set +x 00:30:00.602 17:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.602 17:08:49 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:30:00.602 17:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.602 17:08:49 -- common/autotest_common.sh@10 -- # set +x 00:30:00.602 17:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.602 17:08:49 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:30:00.603 17:08:49 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:30:00.603 17:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.603 17:08:49 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:30:00.603 17:08:49 -- common/autotest_common.sh@10 -- # set +x 00:30:00.603 17:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.861 17:08:49 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:30:00.861 17:08:49 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:30:00.861 17:08:49 -- bdev/blockdev.sh@747 -- # jq -r .name 00:30:00.861 17:08:49 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:30:00.861 17:08:49 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:30:00.861 17:08:49 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:30:00.861 17:08:49 -- bdev/blockdev.sh@752 -- # killprocess 137420 00:30:00.861 17:08:49 -- common/autotest_common.sh@936 -- # '[' -z 137420 ']' 00:30:00.861 17:08:49 -- common/autotest_common.sh@940 -- # kill -0 137420 00:30:00.861 17:08:49 -- common/autotest_common.sh@941 -- # uname 00:30:00.861 17:08:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:00.861 17:08:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137420 00:30:00.861 17:08:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:00.861 killing process with pid 137420 00:30:00.861 17:08:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:00.861 17:08:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137420' 00:30:00.861 17:08:49 -- common/autotest_common.sh@955 -- # kill 137420 00:30:00.861 17:08:49 -- common/autotest_common.sh@960 -- # wait 137420 00:30:02.761 17:08:51 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:02.761 17:08:51 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:30:02.761 17:08:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:30:02.761 17:08:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:02.761 17:08:51 -- common/autotest_common.sh@10 -- # set +x 00:30:02.761 ************************************ 00:30:02.761 START TEST bdev_hello_world 00:30:02.761 ************************************ 00:30:02.761 17:08:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 
'' 00:30:02.761 [2024-11-05 17:08:51.403470] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:30:02.761 [2024-11-05 17:08:51.403657] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137863 ] 00:30:02.761 [2024-11-05 17:08:51.569697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.020 [2024-11-05 17:08:51.738472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.278 [2024-11-05 17:08:52.115972] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:30:03.278 [2024-11-05 17:08:52.116058] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:30:03.278 [2024-11-05 17:08:52.116109] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:30:03.278 [2024-11-05 17:08:52.118707] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:30:03.278 [2024-11-05 17:08:52.119299] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:30:03.278 [2024-11-05 17:08:52.119354] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:30:03.278 [2024-11-05 17:08:52.119641] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:30:03.278 00:30:03.278 [2024-11-05 17:08:52.119696] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:30:04.212 00:30:04.212 real 0m1.684s 00:30:04.212 user 0m1.340s 00:30:04.212 sys 0m0.245s 00:30:04.212 17:08:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:04.212 17:08:53 -- common/autotest_common.sh@10 -- # set +x 00:30:04.212 ************************************ 00:30:04.212 END TEST bdev_hello_world 00:30:04.212 ************************************ 00:30:04.212 17:08:53 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:30:04.212 17:08:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:04.212 17:08:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:04.212 17:08:53 -- common/autotest_common.sh@10 -- # set +x 00:30:04.212 ************************************ 00:30:04.212 START TEST bdev_bounds 00:30:04.212 ************************************ 00:30:04.212 17:08:53 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:30:04.212 17:08:53 -- bdev/blockdev.sh@288 -- # bdevio_pid=137908 00:30:04.212 17:08:53 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:30:04.212 17:08:53 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:04.212 17:08:53 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 137908' 00:30:04.212 Process bdevio pid: 137908 00:30:04.212 17:08:53 -- bdev/blockdev.sh@291 -- # waitforlisten 137908 00:30:04.212 17:08:53 -- common/autotest_common.sh@829 -- # '[' -z 137908 ']' 00:30:04.212 17:08:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.212 17:08:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:04.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:04.212 17:08:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
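Editor's note: bdev_bounds wires the bdevio server to its Python driver. The pattern, with arguments from the trace (run_test and cleanup plumbing omitted):

    # start the I/O-boundary test server against the generated bdev config
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    bdevio_pid=$!
    # once it listens on /var/tmp/spdk.sock, drive the test suites over RPC
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests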
00:30:04.212 17:08:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:04.212 17:08:53 -- common/autotest_common.sh@10 -- # set +x 00:30:04.471 [2024-11-05 17:08:53.152280] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:30:04.471 [2024-11-05 17:08:53.152493] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137908 ] 00:30:04.471 [2024-11-05 17:08:53.329687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:04.728 [2024-11-05 17:08:53.491106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.728 [2024-11-05 17:08:53.491241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.728 [2024-11-05 17:08:53.491239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:05.294 17:08:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:05.294 17:08:54 -- common/autotest_common.sh@862 -- # return 0 00:30:05.294 17:08:54 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:30:05.294 I/O targets: 00:30:05.294 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:30:05.294 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:30:05.294 00:30:05.294 00:30:05.294 CUnit - A unit testing framework for C - Version 2.1-3 00:30:05.294 http://cunit.sourceforge.net/ 00:30:05.294 00:30:05.294 00:30:05.294 Suite: bdevio tests on: Nvme0n1p2 00:30:05.294 Test: blockdev write read block ...passed 00:30:05.294 Test: blockdev write zeroes read block ...passed 00:30:05.294 Test: blockdev write zeroes read no split ...passed 00:30:05.295 Test: blockdev write zeroes read split ...passed 00:30:05.553 Test: blockdev write zeroes read split partial ...passed 00:30:05.553 Test: blockdev reset ...[2024-11-05 17:08:54.210218] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:30:05.553 [2024-11-05 17:08:54.213601] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:05.553 passed 00:30:05.553 Test: blockdev write read 8 blocks ...passed 00:30:05.553 Test: blockdev write read size > 128k ...passed 00:30:05.553 Test: blockdev write read invalid size ...passed 00:30:05.553 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:05.553 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:05.553 Test: blockdev write read max offset ...passed 00:30:05.553 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:05.553 Test: blockdev writev readv 8 blocks ...passed 00:30:05.553 Test: blockdev writev readv 30 x 1block ...passed 00:30:05.553 Test: blockdev writev readv block ...passed 00:30:05.553 Test: blockdev writev readv size > 128k ...passed 00:30:05.553 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:05.553 Test: blockdev comparev and writev ...[2024-11-05 17:08:54.223813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x3540b000 len:0x1000 00:30:05.553 [2024-11-05 17:08:54.224015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:05.553 passed 00:30:05.553 Test: blockdev nvme passthru rw ...passed 00:30:05.553 Test: blockdev nvme passthru vendor specific ...passed 00:30:05.553 Test: blockdev nvme admin passthru ...passed 00:30:05.553 Test: blockdev copy ...passed 00:30:05.553 Suite: bdevio tests on: Nvme0n1p1 00:30:05.553 Test: blockdev write read block ...passed 00:30:05.553 Test: blockdev write zeroes read block ...passed 00:30:05.553 Test: blockdev write zeroes read no split ...passed 00:30:05.553 Test: blockdev write zeroes read split ...passed 00:30:05.553 Test: blockdev write zeroes read split partial ...passed 00:30:05.553 Test: blockdev reset ...[2024-11-05 17:08:54.270109] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:30:05.553 [2024-11-05 17:08:54.272996] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:05.553 passed 00:30:05.553 Test: blockdev write read 8 blocks ...passed 00:30:05.553 Test: blockdev write read size > 128k ...passed 00:30:05.553 Test: blockdev write read invalid size ...passed 00:30:05.553 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:05.553 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:05.553 Test: blockdev write read max offset ...passed 00:30:05.553 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:05.553 Test: blockdev writev readv 8 blocks ...passed 00:30:05.553 Test: blockdev writev readv 30 x 1block ...passed 00:30:05.553 Test: blockdev writev readv block ...passed 00:30:05.553 Test: blockdev writev readv size > 128k ...passed 00:30:05.553 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:05.553 Test: blockdev comparev and writev ...[2024-11-05 17:08:54.282659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x3540d000 len:0x1000 00:30:05.553 [2024-11-05 17:08:54.282879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:05.553 passed 00:30:05.553 Test: blockdev nvme passthru rw ...passed 00:30:05.553 Test: blockdev nvme passthru vendor specific ...passed 00:30:05.553 Test: blockdev nvme admin passthru ...passed 00:30:05.553 Test: blockdev copy ...passed 00:30:05.553 00:30:05.553 Run Summary: Type Total Ran Passed Failed Inactive 00:30:05.553 suites 2 2 n/a 0 0 00:30:05.553 tests 46 46 46 0 0 00:30:05.553 asserts 284 284 284 0 n/a 00:30:05.553 00:30:05.554 Elapsed time = 0.331 seconds 00:30:05.554 0 00:30:05.554 17:08:54 -- bdev/blockdev.sh@293 -- # killprocess 137908 00:30:05.554 17:08:54 -- common/autotest_common.sh@936 -- # '[' -z 137908 ']' 00:30:05.554 17:08:54 -- common/autotest_common.sh@940 -- # kill -0 137908 00:30:05.554 17:08:54 -- common/autotest_common.sh@941 -- # uname 00:30:05.554 17:08:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:05.554 17:08:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137908 00:30:05.554 17:08:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:05.554 17:08:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:05.554 17:08:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137908' 00:30:05.554 killing process with pid 137908 00:30:05.554 17:08:54 -- common/autotest_common.sh@955 -- # kill 137908 00:30:05.554 17:08:54 -- common/autotest_common.sh@960 -- # wait 137908 00:30:06.489 17:08:55 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:30:06.489 00:30:06.489 real 0m2.073s 00:30:06.489 user 0m4.880s 00:30:06.489 sys 0m0.322s 00:30:06.489 17:08:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:06.489 17:08:55 -- common/autotest_common.sh@10 -- # set +x 00:30:06.489 ************************************ 00:30:06.489 END TEST bdev_bounds 00:30:06.489 ************************************ 00:30:06.489 17:08:55 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:30:06.489 17:08:55 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:30:06.489 17:08:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:06.489 17:08:55 -- common/autotest_common.sh@10 -- # set +x 00:30:06.489 ************************************ 00:30:06.489 START TEST bdev_nbd 
00:30:06.489 ************************************ 00:30:06.489 17:08:55 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:30:06.489 17:08:55 -- bdev/blockdev.sh@298 -- # uname -s 00:30:06.489 17:08:55 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:30:06.489 17:08:55 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:06.489 17:08:55 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:06.489 17:08:55 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:30:06.489 17:08:55 -- bdev/blockdev.sh@302 -- # local bdev_all 00:30:06.490 17:08:55 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:30:06.490 17:08:55 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:30:06.490 17:08:55 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:30:06.490 17:08:55 -- bdev/blockdev.sh@309 -- # local nbd_all 00:30:06.490 17:08:55 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:30:06.490 17:08:55 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:06.490 17:08:55 -- bdev/blockdev.sh@312 -- # local nbd_list 00:30:06.490 17:08:55 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:30:06.490 17:08:55 -- bdev/blockdev.sh@313 -- # local bdev_list 00:30:06.490 17:08:55 -- bdev/blockdev.sh@316 -- # nbd_pid=137971 00:30:06.490 17:08:55 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:30:06.490 17:08:55 -- bdev/blockdev.sh@318 -- # waitforlisten 137971 /var/tmp/spdk-nbd.sock 00:30:06.490 17:08:55 -- common/autotest_common.sh@829 -- # '[' -z 137971 ']' 00:30:06.490 17:08:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:06.490 17:08:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:06.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:06.490 17:08:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:30:06.490 17:08:55 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:06.490 17:08:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:06.490 17:08:55 -- common/autotest_common.sh@10 -- # set +x 00:30:06.490 [2024-11-05 17:08:55.265101] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
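Editor's note: unlike the single-bdev pass earlier in the log, this bdev_nbd run maps both GPT partitions at once. Per the trace, nbd_start_disks issues one nbd_start_disk RPC per (bdev, device) pair against the dedicated NBD socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc" -s "$sock" nbd_start_disk Nvme0n1p1 /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk Nvme0n1p2 /dev/nbd1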
00:30:06.490 [2024-11-05 17:08:55.265501] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.748 [2024-11-05 17:08:55.412639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.748 [2024-11-05 17:08:55.575593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.314 17:08:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:07.314 17:08:56 -- common/autotest_common.sh@862 -- # return 0 00:30:07.314 17:08:56 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:30:07.314 17:08:56 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:07.314 17:08:56 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:30:07.314 17:08:56 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:30:07.314 17:08:56 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:30:07.314 17:08:56 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:07.314 17:08:56 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:30:07.314 17:08:56 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:30:07.314 17:08:56 -- bdev/nbd_common.sh@24 -- # local i 00:30:07.314 17:08:56 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:30:07.314 17:08:56 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:30:07.314 17:08:56 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:30:07.314 17:08:56 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:30:07.572 17:08:56 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:30:07.572 17:08:56 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:30:07.572 17:08:56 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:30:07.572 17:08:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:07.572 17:08:56 -- common/autotest_common.sh@867 -- # local i 00:30:07.572 17:08:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:07.572 17:08:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:07.572 17:08:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:07.572 17:08:56 -- common/autotest_common.sh@871 -- # break 00:30:07.572 17:08:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:07.572 17:08:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:07.572 17:08:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:07.572 1+0 records in 00:30:07.572 1+0 records out 00:30:07.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536112 s, 7.6 MB/s 00:30:07.572 17:08:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:07.572 17:08:56 -- common/autotest_common.sh@884 -- # size=4096 00:30:07.572 17:08:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:07.572 17:08:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:07.572 17:08:56 -- common/autotest_common.sh@887 -- # return 0 00:30:07.572 17:08:56 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:07.572 17:08:56 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:30:07.572 17:08:56 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:30:07.846 17:08:56 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:30:07.846 17:08:56 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:30:07.846 17:08:56 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:30:07.846 17:08:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:07.846 17:08:56 -- common/autotest_common.sh@867 -- # local i 00:30:07.846 17:08:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:07.846 17:08:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:07.846 17:08:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:07.846 17:08:56 -- common/autotest_common.sh@871 -- # break 00:30:07.846 17:08:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:07.846 17:08:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:07.846 17:08:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:07.846 1+0 records in 00:30:07.846 1+0 records out 00:30:07.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559537 s, 7.3 MB/s 00:30:07.846 17:08:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:07.846 17:08:56 -- common/autotest_common.sh@884 -- # size=4096 00:30:07.846 17:08:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:08.117 17:08:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:08.117 17:08:56 -- common/autotest_common.sh@887 -- # return 0 00:30:08.117 17:08:56 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:08.117 17:08:56 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:30:08.117 17:08:56 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:08.117 17:08:56 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:30:08.117 { 00:30:08.117 "nbd_device": "/dev/nbd0", 00:30:08.117 "bdev_name": "Nvme0n1p1" 00:30:08.117 }, 00:30:08.117 { 00:30:08.117 "nbd_device": "/dev/nbd1", 00:30:08.117 "bdev_name": "Nvme0n1p2" 00:30:08.118 } 00:30:08.118 ]' 00:30:08.118 17:08:56 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:30:08.118 17:08:56 -- bdev/nbd_common.sh@119 -- # echo '[ 00:30:08.118 { 00:30:08.118 "nbd_device": "/dev/nbd0", 00:30:08.118 "bdev_name": "Nvme0n1p1" 00:30:08.118 }, 00:30:08.118 { 00:30:08.118 "nbd_device": "/dev/nbd1", 00:30:08.118 "bdev_name": "Nvme0n1p2" 00:30:08.118 } 00:30:08.118 ]' 00:30:08.118 17:08:56 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:30:08.375 17:08:57 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:08.376 17:08:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:08.376 17:08:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:08.376 17:08:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:08.376 17:08:57 -- bdev/nbd_common.sh@51 -- # local i 00:30:08.376 17:08:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:08.376 17:08:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:08.376 17:08:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:08.376 17:08:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:08.376 17:08:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:08.376 17:08:57 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:08.376 17:08:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:08.376 17:08:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:08.376 17:08:57 -- bdev/nbd_common.sh@41 -- # break 00:30:08.376 17:08:57 -- bdev/nbd_common.sh@45 -- # return 0 00:30:08.376 17:08:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:08.376 17:08:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:08.634 17:08:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:08.634 17:08:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:08.634 17:08:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:08.634 17:08:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:08.634 17:08:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:08.634 17:08:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:08.634 17:08:57 -- bdev/nbd_common.sh@41 -- # break 00:30:08.634 17:08:57 -- bdev/nbd_common.sh@45 -- # return 0 00:30:08.634 17:08:57 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:08.634 17:08:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:08.634 17:08:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@65 -- # true 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@65 -- # count=0 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@122 -- # count=0 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@127 -- # return 0 00:30:08.893 17:08:57 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@12 -- # local i 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:08.893 17:08:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:30:09.151 /dev/nbd0 00:30:09.409 17:08:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:09.409 17:08:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:09.409 17:08:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:09.409 17:08:58 -- common/autotest_common.sh@867 -- # local i 00:30:09.409 17:08:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:09.409 17:08:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:09.409 17:08:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:09.409 17:08:58 -- common/autotest_common.sh@871 -- # break 00:30:09.409 17:08:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:09.409 17:08:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:09.409 17:08:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:09.409 1+0 records in 00:30:09.409 1+0 records out 00:30:09.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000666341 s, 6.1 MB/s 00:30:09.409 17:08:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:09.409 17:08:58 -- common/autotest_common.sh@884 -- # size=4096 00:30:09.409 17:08:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:09.409 17:08:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:09.409 17:08:58 -- common/autotest_common.sh@887 -- # return 0 00:30:09.409 17:08:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:09.409 17:08:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:09.409 17:08:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:30:09.667 /dev/nbd1 00:30:09.667 17:08:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:09.667 17:08:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:09.667 17:08:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:09.667 17:08:58 -- common/autotest_common.sh@867 -- # local i 00:30:09.667 17:08:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:09.667 17:08:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:09.667 17:08:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:09.667 17:08:58 -- common/autotest_common.sh@871 -- # break 00:30:09.667 17:08:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:09.667 17:08:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:09.667 17:08:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:09.667 1+0 records in 00:30:09.667 1+0 records out 00:30:09.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620166 s, 6.6 MB/s 00:30:09.667 17:08:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:09.667 17:08:58 -- common/autotest_common.sh@884 -- # size=4096 00:30:09.667 17:08:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:09.667 17:08:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:09.667 17:08:58 -- common/autotest_common.sh@887 -- # return 0 00:30:09.667 17:08:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:09.667 17:08:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:09.667 17:08:58 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
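The trace above exports each GPT partition bdev as a kernel NBD node through the dedicated /var/tmp/spdk-nbd.sock RPC server, waits for the node to appear in /proc/partitions, and proves it answers I/O with a single direct 4k read. A minimal sketch of that attach-and-probe pattern, using the paths recorded in this log (the 20-iteration retry mirrors the waitfornbd helper; the 0.1 s sleep and the /dev/null sink are assumptions, the helper reads into a scratch file and stats it instead):

    #!/usr/bin/env bash
    # Sketch: attach two GPT bdevs over NBD and probe them, as in the trace above.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

    declare -A disks=([Nvme0n1p1]=/dev/nbd0 [Nvme0n1p2]=/dev/nbd1)
    for bdev in "${!disks[@]}"; do
        nbd=${disks[$bdev]}
        rpc nbd_start_disk "$bdev" "$nbd"                # export the bdev as an NBD node
        for ((i = 1; i <= 20; i++)); do                  # wait until the kernel sees it
            grep -q -w "$(basename "$nbd")" /proc/partitions && break
            sleep 0.1                                    # assumed delay; not visible in the log
        done
        dd if="$nbd" of=/dev/null bs=4096 count=1 iflag=direct   # one direct 4k read
    done

The count check that follows (nbd_get_count / nbd_get_disks) then confirms the RPC server reports exactly the two attached devices.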
00:30:09.668 17:08:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:09.668 17:08:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:09.668 17:08:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:09.668 { 00:30:09.668 "nbd_device": "/dev/nbd0", 00:30:09.668 "bdev_name": "Nvme0n1p1" 00:30:09.668 }, 00:30:09.668 { 00:30:09.668 "nbd_device": "/dev/nbd1", 00:30:09.668 "bdev_name": "Nvme0n1p2" 00:30:09.668 } 00:30:09.668 ]' 00:30:09.668 17:08:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:09.668 { 00:30:09.668 "nbd_device": "/dev/nbd0", 00:30:09.668 "bdev_name": "Nvme0n1p1" 00:30:09.668 }, 00:30:09.668 { 00:30:09.668 "nbd_device": "/dev/nbd1", 00:30:09.668 "bdev_name": "Nvme0n1p2" 00:30:09.668 } 00:30:09.668 ]' 00:30:09.668 17:08:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:30:09.926 /dev/nbd1' 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:30:09.926 /dev/nbd1' 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@65 -- # count=2 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@66 -- # echo 2 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@95 -- # count=2 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:30:09.926 256+0 records in 00:30:09.926 256+0 records out 00:30:09.926 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00854478 s, 123 MB/s 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:09.926 256+0 records in 00:30:09.926 256+0 records out 00:30:09.926 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0779785 s, 13.4 MB/s 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:30:09.926 256+0 records in 00:30:09.926 256+0 records out 00:30:09.926 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.083413 s, 12.6 MB/s 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
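The dd pair above is the write pass: the same 1 MiB file of random data is pushed to both NBD nodes with direct I/O, and the cmp pass that follows reads it back and byte-compares. Stripped of the helper plumbing, the data path reduces to roughly:

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256               # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct    # write pass
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                               # verify: byte-compare readback
    done
    rm "$tmp"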
00:30:09.926 17:08:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:09.926 17:08:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:10.184 17:08:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:10.184 17:08:58 -- bdev/nbd_common.sh@51 -- # local i 00:30:10.184 17:08:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:10.184 17:08:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:10.442 17:08:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:10.442 17:08:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:10.442 17:08:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:10.442 17:08:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:10.442 17:08:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:10.442 17:08:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:10.442 17:08:59 -- bdev/nbd_common.sh@41 -- # break 00:30:10.442 17:08:59 -- bdev/nbd_common.sh@45 -- # return 0 00:30:10.442 17:08:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:10.442 17:08:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:10.701 17:08:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:10.701 17:08:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:10.701 17:08:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:10.701 17:08:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:10.701 17:08:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:10.701 17:08:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:10.701 17:08:59 -- bdev/nbd_common.sh@41 -- # break 00:30:10.701 17:08:59 -- bdev/nbd_common.sh@45 -- # return 0 00:30:10.701 17:08:59 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:10.701 17:08:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:10.701 17:08:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:10.701 17:08:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:10.701 17:08:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:10.701 17:08:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:10.958 17:08:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:10.958 17:08:59 -- bdev/nbd_common.sh@65 -- # echo '' 00:30:10.958 17:08:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:10.958 17:08:59 -- bdev/nbd_common.sh@65 -- # true 00:30:10.958 17:08:59 -- bdev/nbd_common.sh@65 -- # count=0 00:30:10.958 17:08:59 -- bdev/nbd_common.sh@66 -- # echo 0 00:30:10.958 17:08:59 -- bdev/nbd_common.sh@104 -- # count=0 00:30:10.958 17:08:59 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:10.958 17:08:59 -- 
bdev/nbd_common.sh@109 -- # return 0 00:30:10.958 17:08:59 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:10.958 17:08:59 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:10.958 17:08:59 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:10.958 17:08:59 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:30:10.958 17:08:59 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:30:10.958 17:08:59 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:30:11.216 malloc_lvol_verify 00:30:11.216 17:08:59 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:30:11.473 37bbb8c1-e881-4056-87f0-b9e874b941e5 00:30:11.473 17:09:00 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:30:11.731 7f366f1b-0771-4087-9fe7-ec081fa2887c 00:30:11.731 17:09:00 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:30:11.989 /dev/nbd0 00:30:11.989 17:09:00 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:30:11.989 mke2fs 1.46.5 (30-Dec-2021) 00:30:11.989 00:30:11.989 Filesystem too small for a journal 00:30:11.989 Discarding device blocks: 0/1024 done 00:30:11.989 Creating filesystem with 1024 4k blocks and 1024 inodes 00:30:11.989 00:30:11.989 Allocating group tables: 0/1 done 00:30:11.989 Writing inode tables: 0/1 done 00:30:11.989 Writing superblocks and filesystem accounting information: 0/1 done 00:30:11.989 00:30:11.989 17:09:00 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:30:11.989 17:09:00 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:11.989 17:09:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:11.989 17:09:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:11.989 17:09:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:11.989 17:09:00 -- bdev/nbd_common.sh@51 -- # local i 00:30:11.989 17:09:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:11.989 17:09:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:12.247 17:09:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:12.247 17:09:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:12.247 17:09:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:12.247 17:09:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:12.247 17:09:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:12.247 17:09:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:12.247 17:09:00 -- bdev/nbd_common.sh@41 -- # break 00:30:12.247 17:09:00 -- bdev/nbd_common.sh@45 -- # return 0 00:30:12.247 17:09:00 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:30:12.247 17:09:00 -- bdev/nbd_common.sh@147 -- # return 0 00:30:12.247 17:09:00 -- bdev/blockdev.sh@324 -- # killprocess 137971 00:30:12.247 17:09:00 -- common/autotest_common.sh@936 -- # '[' -z 137971 ']' 00:30:12.247 17:09:00 -- common/autotest_common.sh@940 -- # kill -0 137971 00:30:12.247 17:09:00 -- common/autotest_common.sh@941 -- # uname 00:30:12.247 17:09:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:12.247 17:09:00 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137971 00:30:12.247 killing process with pid 137971 00:30:12.247 17:09:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:12.247 17:09:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:12.247 17:09:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137971' 00:30:12.247 17:09:00 -- common/autotest_common.sh@955 -- # kill 137971 00:30:12.247 17:09:00 -- common/autotest_common.sh@960 -- # wait 137971 00:30:13.184 ************************************ 00:30:13.184 END TEST bdev_nbd 00:30:13.184 17:09:01 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:30:13.184 00:30:13.184 real 0m6.752s 00:30:13.184 user 0m9.813s 00:30:13.184 sys 0m1.596s 00:30:13.184 17:09:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:13.184 17:09:01 -- common/autotest_common.sh@10 -- # set +x 00:30:13.184 ************************************ 00:30:13.184 17:09:01 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:30:13.184 17:09:01 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:30:13.184 17:09:01 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:30:13.184 17:09:01 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:30:13.184 skipping fio tests on NVMe due to multi-ns failures. 00:30:13.184 17:09:01 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:13.184 17:09:01 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:13.184 17:09:01 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:30:13.184 17:09:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:13.184 17:09:01 -- common/autotest_common.sh@10 -- # set +x 00:30:13.184 ************************************ 00:30:13.184 START TEST bdev_verify 00:30:13.184 ************************************ 00:30:13.184 17:09:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:13.442 [2024-11-05 17:09:02.089605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:30:13.442 [2024-11-05 17:09:02.090004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138223 ] 00:30:13.442 [2024-11-05 17:09:02.265780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:13.700 [2024-11-05 17:09:02.456661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.700 [2024-11-05 17:09:02.456686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.267 Running I/O for 5 seconds... 
00:30:19.532 00:30:19.532 Latency(us) 00:30:19.532 [2024-11-05T17:09:08.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.532 [2024-11-05T17:09:08.409Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:19.532 Verification LBA range: start 0x0 length 0x4ff80 00:30:19.532 Nvme0n1p1 : 5.03 6034.36 23.57 0.00 0.00 21159.41 2412.92 26691.03 00:30:19.532 [2024-11-05T17:09:08.409Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:19.532 Verification LBA range: start 0x4ff80 length 0x4ff80 00:30:19.532 Nvme0n1p1 : 5.03 5978.72 23.35 0.00 0.00 21315.58 3187.43 29193.31 00:30:19.532 [2024-11-05T17:09:08.409Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:19.532 Verification LBA range: start 0x0 length 0x4ff7f 00:30:19.532 Nvme0n1p2 : 5.03 6030.87 23.56 0.00 0.00 21154.31 3574.69 28597.53 00:30:19.532 [2024-11-05T17:09:08.409Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:19.532 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:30:19.532 Nvme0n1p2 : 5.02 5981.60 23.37 0.00 0.00 21341.02 2710.81 27286.81 00:30:19.532 [2024-11-05T17:09:08.409Z] =================================================================================================================== 00:30:19.532 [2024-11-05T17:09:08.409Z] Total : 24025.54 93.85 0.00 0.00 21242.20 2412.92 29193.31 00:30:20.905 00:30:20.905 real 0m7.472s 00:30:20.905 user 0m13.763s 00:30:20.905 sys 0m0.300s 00:30:20.905 17:09:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:20.905 17:09:09 -- common/autotest_common.sh@10 -- # set +x 00:30:20.905 ************************************ 00:30:20.905 END TEST bdev_verify 00:30:20.905 ************************************ 00:30:20.905 17:09:09 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:20.905 17:09:09 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:30:20.905 17:09:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:20.905 17:09:09 -- common/autotest_common.sh@10 -- # set +x 00:30:20.905 ************************************ 00:30:20.905 START TEST bdev_verify_big_io 00:30:20.905 ************************************ 00:30:20.905 17:09:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:20.905 [2024-11-05 17:09:09.599547] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:30:20.905 [2024-11-05 17:09:09.599693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138325 ] 00:30:20.905 [2024-11-05 17:09:09.756163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:21.162 [2024-11-05 17:09:09.934114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.162 [2024-11-05 17:09:09.934120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.727 Running I/O for 5 seconds... 
00:30:26.988 00:30:26.988 Latency(us) 00:30:26.988 [2024-11-05T17:09:15.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.988 [2024-11-05T17:09:15.865Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:26.988 Verification LBA range: start 0x0 length 0x4ff8 00:30:26.988 Nvme0n1p1 : 5.08 1049.85 65.62 0.00 0.00 120535.04 7923.90 178257.92 00:30:26.988 [2024-11-05T17:09:15.865Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:26.988 Verification LBA range: start 0x4ff8 length 0x4ff8 00:30:26.988 Nvme0n1p1 : 5.07 1297.43 81.09 0.00 0.00 97727.50 2666.12 144894.14 00:30:26.988 [2024-11-05T17:09:15.865Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:26.988 Verification LBA range: start 0x0 length 0x4ff7 00:30:26.988 Nvme0n1p2 : 5.09 1066.35 66.65 0.00 0.00 117495.78 875.05 160146.15 00:30:26.988 [2024-11-05T17:09:15.865Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:26.988 Verification LBA range: start 0x4ff7 length 0x4ff7 00:30:26.988 Nvme0n1p2 : 5.07 1304.48 81.53 0.00 0.00 96411.88 1184.12 111530.36 00:30:26.988 [2024-11-05T17:09:15.865Z] =================================================================================================================== 00:30:26.988 [2024-11-05T17:09:15.865Z] Total : 4718.11 294.88 0.00 0.00 106919.46 875.05 178257.92 00:30:28.365 ************************************ 00:30:28.365 END TEST bdev_verify_big_io 00:30:28.365 ************************************ 00:30:28.365 00:30:28.365 real 0m7.306s 00:30:28.365 user 0m13.512s 00:30:28.365 sys 0m0.277s 00:30:28.365 17:09:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:28.365 17:09:16 -- common/autotest_common.sh@10 -- # set +x 00:30:28.365 17:09:16 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:28.365 17:09:16 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:30:28.365 17:09:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:28.365 17:09:16 -- common/autotest_common.sh@10 -- # set +x 00:30:28.365 ************************************ 00:30:28.365 START TEST bdev_write_zeroes 00:30:28.365 ************************************ 00:30:28.365 17:09:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:28.365 [2024-11-05 17:09:16.978639] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:30:28.365 [2024-11-05 17:09:16.978842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138433 ] 00:30:28.365 [2024-11-05 17:09:17.149005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.647 [2024-11-05 17:09:17.315030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.916 Running I/O for 1 seconds... 
00:30:29.847 00:30:29.847 Latency(us) 00:30:29.847 [2024-11-05T17:09:18.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.847 [2024-11-05T17:09:18.724Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:29.847 Nvme0n1p1 : 1.01 29495.42 115.22 0.00 0.00 4330.77 2293.76 14179.61 00:30:29.847 [2024-11-05T17:09:18.724Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:29.847 Nvme0n1p2 : 1.01 29409.98 114.88 0.00 0.00 4336.67 2263.97 13941.29 00:30:29.847 [2024-11-05T17:09:18.724Z] =================================================================================================================== 00:30:29.847 [2024-11-05T17:09:18.724Z] Total : 58905.40 230.10 0.00 0.00 4333.72 2263.97 14179.61 00:30:30.781 ************************************ 00:30:30.781 END TEST bdev_write_zeroes 00:30:30.781 ************************************ 00:30:30.781 00:30:30.781 real 0m2.636s 00:30:30.781 user 0m2.295s 00:30:30.781 sys 0m0.241s 00:30:30.781 17:09:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:30.781 17:09:19 -- common/autotest_common.sh@10 -- # set +x 00:30:30.781 17:09:19 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:30.781 17:09:19 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:30:30.781 17:09:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:30.781 17:09:19 -- common/autotest_common.sh@10 -- # set +x 00:30:30.781 ************************************ 00:30:30.781 START TEST bdev_json_nonenclosed 00:30:30.781 ************************************ 00:30:30.781 17:09:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:30.781 [2024-11-05 17:09:19.642772] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:30:30.781 [2024-11-05 17:09:19.642950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138490 ] 00:30:31.038 [2024-11-05 17:09:19.792562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.296 [2024-11-05 17:09:19.948584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.296 [2024-11-05 17:09:19.948766] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:30:31.296 [2024-11-05 17:09:19.948800] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:31.553 00:30:31.553 real 0m0.663s 00:30:31.553 user 0m0.451s 00:30:31.553 sys 0m0.112s 00:30:31.553 17:09:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:31.554 ************************************ 00:30:31.554 END TEST bdev_json_nonenclosed 00:30:31.554 ************************************ 00:30:31.554 17:09:20 -- common/autotest_common.sh@10 -- # set +x 00:30:31.554 17:09:20 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:31.554 17:09:20 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:30:31.554 17:09:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:31.554 17:09:20 -- common/autotest_common.sh@10 -- # set +x 00:30:31.554 ************************************ 00:30:31.554 START TEST bdev_json_nonarray 00:30:31.554 ************************************ 00:30:31.554 17:09:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:31.554 [2024-11-05 17:09:20.358087] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:30:31.554 [2024-11-05 17:09:20.358257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138529 ] 00:30:31.811 [2024-11-05 17:09:20.509081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.812 [2024-11-05 17:09:20.664484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.812 [2024-11-05 17:09:20.664699] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:30:31.812 [2024-11-05 17:09:20.664741] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:32.070 00:30:32.070 real 0m0.664s 00:30:32.070 user 0m0.441s 00:30:32.070 sys 0m0.124s 00:30:32.328 17:09:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:32.328 17:09:20 -- common/autotest_common.sh@10 -- # set +x 00:30:32.328 ************************************ 00:30:32.328 END TEST bdev_json_nonarray 00:30:32.328 ************************************ 00:30:32.328 17:09:21 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:30:32.328 17:09:21 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:30:32.328 17:09:21 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:30:32.328 17:09:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:32.328 17:09:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:32.328 17:09:21 -- common/autotest_common.sh@10 -- # set +x 00:30:32.328 ************************************ 00:30:32.328 START TEST bdev_gpt_uuid 00:30:32.328 ************************************ 00:30:32.328 17:09:21 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:30:32.328 17:09:21 -- bdev/blockdev.sh@612 -- # local bdev 00:30:32.328 17:09:21 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:30:32.328 17:09:21 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=138552 00:30:32.328 17:09:21 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:30:32.328 17:09:21 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:32.328 17:09:21 -- bdev/blockdev.sh@47 -- # waitforlisten 138552 00:30:32.328 17:09:21 -- common/autotest_common.sh@829 -- # '[' -z 138552 ']' 00:30:32.328 17:09:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.328 17:09:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:32.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.329 17:09:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.329 17:09:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:32.329 17:09:21 -- common/autotest_common.sh@10 -- # set +x 00:30:32.329 [2024-11-05 17:09:21.102724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:30:32.329 [2024-11-05 17:09:21.102945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138552 ] 00:30:32.587 [2024-11-05 17:09:21.269959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.587 [2024-11-05 17:09:21.434136] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:32.587 [2024-11-05 17:09:21.434367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.961 17:09:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:33.961 17:09:22 -- common/autotest_common.sh@862 -- # return 0 00:30:33.961 17:09:22 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:33.961 17:09:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.961 17:09:22 -- common/autotest_common.sh@10 -- # set +x 00:30:33.961 Some configs were skipped because the RPC state that can call them passed over. 
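With the on-disk config loaded, the step below looks each GPT partition up by its fixed partition UUID and asserts on the JSON that bdev_get_bdevs returns. The checks in the trace are jq one-liners; a condensed equivalent, assuming the suite's rpc_cmd helper forwards to scripts/rpc.py on the default socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=6f89f330-603b-4116-ac73-2ca8eae53030                    # SPDK_TEST_first partition
    bdev=$("$rpc" bdev_get_bdevs -b "$uuid")
    [[ $(jq -r length <<< "$bdev") == 1 ]] || exit 1             # exactly one bdev matched
    [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$uuid" ]] || exit 1
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$uuid" ]] || exit 1

The same three assertions are then repeated for the second partition (abf1734f-66e5-4c0f-aa29-4021d4d307df, SPDK_TEST_second).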
00:30:33.961 17:09:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.961 17:09:22 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:30:33.961 17:09:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.961 17:09:22 -- common/autotest_common.sh@10 -- # set +x 00:30:34.219 17:09:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.219 17:09:22 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:30:34.219 17:09:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.219 17:09:22 -- common/autotest_common.sh@10 -- # set +x 00:30:34.219 17:09:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.219 17:09:22 -- bdev/blockdev.sh@619 -- # bdev='[ 00:30:34.219 { 00:30:34.219 "name": "Nvme0n1p1", 00:30:34.219 "aliases": [ 00:30:34.219 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:30:34.219 ], 00:30:34.219 "product_name": "GPT Disk", 00:30:34.219 "block_size": 4096, 00:30:34.219 "num_blocks": 655104, 00:30:34.219 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:34.219 "assigned_rate_limits": { 00:30:34.219 "rw_ios_per_sec": 0, 00:30:34.219 "rw_mbytes_per_sec": 0, 00:30:34.219 "r_mbytes_per_sec": 0, 00:30:34.219 "w_mbytes_per_sec": 0 00:30:34.219 }, 00:30:34.219 "claimed": false, 00:30:34.219 "zoned": false, 00:30:34.219 "supported_io_types": { 00:30:34.219 "read": true, 00:30:34.219 "write": true, 00:30:34.219 "unmap": true, 00:30:34.219 "write_zeroes": true, 00:30:34.219 "flush": true, 00:30:34.219 "reset": true, 00:30:34.219 "compare": true, 00:30:34.219 "compare_and_write": false, 00:30:34.219 "abort": true, 00:30:34.219 "nvme_admin": false, 00:30:34.219 "nvme_io": false 00:30:34.219 }, 00:30:34.219 "driver_specific": { 00:30:34.219 "gpt": { 00:30:34.219 "base_bdev": "Nvme0n1", 00:30:34.219 "offset_blocks": 256, 00:30:34.219 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:30:34.219 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:34.219 "partition_name": "SPDK_TEST_first" 00:30:34.219 } 00:30:34.219 } 00:30:34.219 } 00:30:34.219 ]' 00:30:34.219 17:09:22 -- bdev/blockdev.sh@620 -- # jq -r length 00:30:34.219 17:09:22 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:30:34.219 17:09:22 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:30:34.219 17:09:22 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:34.219 17:09:22 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:34.219 17:09:23 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:34.219 17:09:23 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:30:34.219 17:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.219 17:09:23 -- common/autotest_common.sh@10 -- # set +x 00:30:34.219 17:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.219 17:09:23 -- bdev/blockdev.sh@624 -- # bdev='[ 00:30:34.219 { 00:30:34.219 "name": "Nvme0n1p2", 00:30:34.219 "aliases": [ 00:30:34.219 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:30:34.219 ], 00:30:34.219 "product_name": "GPT Disk", 00:30:34.219 "block_size": 4096, 00:30:34.219 "num_blocks": 655103, 00:30:34.219 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:34.219 "assigned_rate_limits": { 00:30:34.219 "rw_ios_per_sec": 0, 00:30:34.219 
"rw_mbytes_per_sec": 0, 00:30:34.219 "r_mbytes_per_sec": 0, 00:30:34.219 "w_mbytes_per_sec": 0 00:30:34.219 }, 00:30:34.219 "claimed": false, 00:30:34.219 "zoned": false, 00:30:34.219 "supported_io_types": { 00:30:34.219 "read": true, 00:30:34.219 "write": true, 00:30:34.219 "unmap": true, 00:30:34.219 "write_zeroes": true, 00:30:34.219 "flush": true, 00:30:34.219 "reset": true, 00:30:34.219 "compare": true, 00:30:34.219 "compare_and_write": false, 00:30:34.219 "abort": true, 00:30:34.219 "nvme_admin": false, 00:30:34.219 "nvme_io": false 00:30:34.219 }, 00:30:34.219 "driver_specific": { 00:30:34.219 "gpt": { 00:30:34.219 "base_bdev": "Nvme0n1", 00:30:34.219 "offset_blocks": 655360, 00:30:34.219 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:30:34.219 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:34.219 "partition_name": "SPDK_TEST_second" 00:30:34.219 } 00:30:34.219 } 00:30:34.219 } 00:30:34.219 ]' 00:30:34.219 17:09:23 -- bdev/blockdev.sh@625 -- # jq -r length 00:30:34.219 17:09:23 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:30:34.219 17:09:23 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:30:34.478 17:09:23 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:34.478 17:09:23 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:34.478 17:09:23 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:34.478 17:09:23 -- bdev/blockdev.sh@629 -- # killprocess 138552 00:30:34.478 17:09:23 -- common/autotest_common.sh@936 -- # '[' -z 138552 ']' 00:30:34.478 17:09:23 -- common/autotest_common.sh@940 -- # kill -0 138552 00:30:34.478 17:09:23 -- common/autotest_common.sh@941 -- # uname 00:30:34.478 17:09:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:34.478 17:09:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138552 00:30:34.478 17:09:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:34.478 17:09:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:34.478 killing process with pid 138552 00:30:34.478 17:09:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138552' 00:30:34.478 17:09:23 -- common/autotest_common.sh@955 -- # kill 138552 00:30:34.478 17:09:23 -- common/autotest_common.sh@960 -- # wait 138552 00:30:36.379 00:30:36.379 real 0m3.893s 00:30:36.379 user 0m4.314s 00:30:36.379 sys 0m0.519s 00:30:36.379 17:09:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:36.379 17:09:24 -- common/autotest_common.sh@10 -- # set +x 00:30:36.379 ************************************ 00:30:36.379 END TEST bdev_gpt_uuid 00:30:36.379 ************************************ 00:30:36.379 17:09:24 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:30:36.379 17:09:24 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:30:36.379 17:09:24 -- bdev/blockdev.sh@809 -- # cleanup 00:30:36.379 17:09:24 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:36.379 17:09:24 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:36.380 17:09:24 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:30:36.380 17:09:24 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:30:36.380 17:09:24 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:30:36.380 17:09:24 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:36.380 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:36.638 Waiting for block devices as requested 00:30:36.638 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:36.638 17:09:25 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:30:36.638 17:09:25 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:30:36.638 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:30:36.638 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:30:36.638 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:30:36.638 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:30:36.638 17:09:25 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:30:36.638 00:30:36.638 real 0m42.153s 00:30:36.638 user 0m59.982s 00:30:36.638 sys 0m6.090s 00:30:36.638 17:09:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:36.638 17:09:25 -- common/autotest_common.sh@10 -- # set +x 00:30:36.638 ************************************ 00:30:36.638 END TEST blockdev_nvme_gpt 00:30:36.638 ************************************ 00:30:36.638 17:09:25 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:36.638 17:09:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:36.638 17:09:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:36.638 17:09:25 -- common/autotest_common.sh@10 -- # set +x 00:30:36.638 ************************************ 00:30:36.638 START TEST nvme 00:30:36.638 ************************************ 00:30:36.638 17:09:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:36.896 * Looking for test storage... 00:30:36.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:30:36.896 17:09:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:30:36.896 17:09:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:30:36.896 17:09:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:30:36.896 17:09:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:30:36.896 17:09:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:30:36.896 17:09:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:30:36.896 17:09:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:30:36.896 17:09:25 -- scripts/common.sh@335 -- # IFS=.-: 00:30:36.896 17:09:25 -- scripts/common.sh@335 -- # read -ra ver1 00:30:36.896 17:09:25 -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.896 17:09:25 -- scripts/common.sh@336 -- # read -ra ver2 00:30:36.896 17:09:25 -- scripts/common.sh@337 -- # local 'op=<' 00:30:36.896 17:09:25 -- scripts/common.sh@339 -- # ver1_l=2 00:30:36.896 17:09:25 -- scripts/common.sh@340 -- # ver2_l=1 00:30:36.896 17:09:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:30:36.896 17:09:25 -- scripts/common.sh@343 -- # case "$op" in 00:30:36.896 17:09:25 -- scripts/common.sh@344 -- # : 1 00:30:36.896 17:09:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:30:36.896 17:09:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:36.896 17:09:25 -- scripts/common.sh@364 -- # decimal 1 00:30:36.896 17:09:25 -- scripts/common.sh@352 -- # local d=1 00:30:36.896 17:09:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.896 17:09:25 -- scripts/common.sh@354 -- # echo 1 00:30:36.896 17:09:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:30:36.896 17:09:25 -- scripts/common.sh@365 -- # decimal 2 00:30:36.896 17:09:25 -- scripts/common.sh@352 -- # local d=2 00:30:36.896 17:09:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.896 17:09:25 -- scripts/common.sh@354 -- # echo 2 00:30:36.896 17:09:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:30:36.896 17:09:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:30:36.896 17:09:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:30:36.896 17:09:25 -- scripts/common.sh@367 -- # return 0 00:30:36.896 17:09:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.896 17:09:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:30:36.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.896 --rc genhtml_branch_coverage=1 00:30:36.896 --rc genhtml_function_coverage=1 00:30:36.896 --rc genhtml_legend=1 00:30:36.896 --rc geninfo_all_blocks=1 00:30:36.896 --rc geninfo_unexecuted_blocks=1 00:30:36.896 00:30:36.896 ' 00:30:36.896 17:09:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:30:36.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.896 --rc genhtml_branch_coverage=1 00:30:36.896 --rc genhtml_function_coverage=1 00:30:36.896 --rc genhtml_legend=1 00:30:36.896 --rc geninfo_all_blocks=1 00:30:36.896 --rc geninfo_unexecuted_blocks=1 00:30:36.896 00:30:36.896 ' 00:30:36.896 17:09:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:30:36.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.896 --rc genhtml_branch_coverage=1 00:30:36.896 --rc genhtml_function_coverage=1 00:30:36.896 --rc genhtml_legend=1 00:30:36.896 --rc geninfo_all_blocks=1 00:30:36.896 --rc geninfo_unexecuted_blocks=1 00:30:36.896 00:30:36.896 ' 00:30:36.896 17:09:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:30:36.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.896 --rc genhtml_branch_coverage=1 00:30:36.896 --rc genhtml_function_coverage=1 00:30:36.896 --rc genhtml_legend=1 00:30:36.896 --rc geninfo_all_blocks=1 00:30:36.896 --rc geninfo_unexecuted_blocks=1 00:30:36.896 00:30:36.896 ' 00:30:36.896 17:09:25 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:37.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:37.413 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:30:38.787 17:09:27 -- nvme/nvme.sh@79 -- # uname 00:30:38.787 17:09:27 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:30:38.787 17:09:27 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:30:38.787 17:09:27 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:30:38.787 17:09:27 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:30:38.787 17:09:27 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:30:38.787 17:09:27 -- common/autotest_common.sh@1055 -- # echo 0 00:30:38.787 17:09:27 -- common/autotest_common.sh@1057 -- # stubpid=138977 00:30:38.787 Waiting for stub to ready for secondary processes... 
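The stub launched below is a bare primary process: it claims 4096 MB of hugepage memory on core mask 0xE with shared-memory id 0, so the nvme test binaries that follow can start against already-initialized state. The readiness poll around it reduces to waiting for the stub's sentinel file while checking the process is still alive:

    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    while [ ! -e /var/run/spdk_stub0 ]; do    # appears once the stub is ready
        [ -e /proc/$stubpid ] || exit 1       # give up if the stub died
        sleep 1s
    done
    echo done.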
00:30:38.787 17:09:27 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:30:38.787 17:09:27 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:30:38.787 17:09:27 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:38.787 17:09:27 -- common/autotest_common.sh@1061 -- # [[ -e /proc/138977 ]] 00:30:38.787 17:09:27 -- common/autotest_common.sh@1062 -- # sleep 1s 00:30:38.787 [2024-11-05 17:09:27.314258] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:30:38.787 [2024-11-05 17:09:27.315079] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.722 17:09:28 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:39.722 17:09:28 -- common/autotest_common.sh@1061 -- # [[ -e /proc/138977 ]] 00:30:39.722 17:09:28 -- common/autotest_common.sh@1062 -- # sleep 1s 00:30:39.722 [2024-11-05 17:09:28.596403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:39.980 [2024-11-05 17:09:28.816111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:39.980 [2024-11-05 17:09:28.816247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.980 [2024-11-05 17:09:28.816245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:39.980 [2024-11-05 17:09:28.830555] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:39.980 [2024-11-05 17:09:28.840994] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:30:39.980 [2024-11-05 17:09:28.841528] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:30:40.547 17:09:29 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:40.547 done. 00:30:40.547 17:09:29 -- common/autotest_common.sh@1064 -- # echo done. 
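With the stub ready, the individual nvme tests run back to back against the same QEMU controller; where a tool needs the stub's preloaded state it passes the matching shared-memory id (the identify dump further below uses -i 0). The first of them, nvme_reset, is a 5-second, queue-depth-64, 4 KiB write workload:

    # invocation as recorded below: -q queue depth, -w workload, -o I/O size in bytes, -t seconds
    /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5

In this run it skips the QEMU-emulated controller and exits without issuing I/O, as its output shows, yet the test still counts as passed.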
00:30:40.547 17:09:29 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:40.547 17:09:29 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:30:40.547 17:09:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:40.547 17:09:29 -- common/autotest_common.sh@10 -- # set +x 00:30:40.547 ************************************ 00:30:40.547 START TEST nvme_reset 00:30:40.547 ************************************ 00:30:40.547 17:09:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:40.805 Initializing NVMe Controllers 00:30:40.805 Skipping QEMU NVMe SSD at 0000:00:06.0 00:30:40.805 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:30:40.805 00:30:40.805 real 0m0.290s 00:30:40.805 user 0m0.075s 00:30:40.805 sys 0m0.125s 00:30:40.805 17:09:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:40.805 ************************************ 00:30:40.805 17:09:29 -- common/autotest_common.sh@10 -- # set +x 00:30:40.805 END TEST nvme_reset 00:30:40.805 ************************************ 00:30:40.805 17:09:29 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:30:40.805 17:09:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:40.805 17:09:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:40.805 17:09:29 -- common/autotest_common.sh@10 -- # set +x 00:30:40.805 ************************************ 00:30:40.805 START TEST nvme_identify 00:30:40.805 ************************************ 00:30:40.805 17:09:29 -- common/autotest_common.sh@1114 -- # nvme_identify 00:30:40.805 17:09:29 -- nvme/nvme.sh@12 -- # bdfs=() 00:30:40.805 17:09:29 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:30:40.805 17:09:29 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:30:40.805 17:09:29 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:30:40.805 17:09:29 -- common/autotest_common.sh@1508 -- # bdfs=() 00:30:40.805 17:09:29 -- common/autotest_common.sh@1508 -- # local bdfs 00:30:40.805 17:09:29 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:40.805 17:09:29 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:40.805 17:09:29 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:30:40.805 17:09:29 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:30:40.805 17:09:29 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:30:40.805 17:09:29 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:30:41.064 [2024-11-05 17:09:29.940302] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 139011 terminated unexpected 00:30:41.064 ===================================================== 00:30:41.064 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:41.064 ===================================================== 00:30:41.064 Controller Capabilities/Features 00:30:41.064 ================================ 00:30:41.064 Vendor ID: 1b36 00:30:41.064 Subsystem Vendor ID: 1af4 00:30:41.064 Serial Number: 12340 00:30:41.064 Model Number: QEMU NVMe Ctrl 00:30:41.064 Firmware Version: 8.0.0 00:30:41.064 Recommended Arb Burst: 6 00:30:41.064 IEEE OUI Identifier: 00 54 52 00:30:41.064 Multi-path I/O 00:30:41.064 May have multiple subsystem ports: No 00:30:41.064 May have multiple controllers: No 00:30:41.064 
Associated with SR-IOV VF: No 00:30:41.064 Max Data Transfer Size: 524288 00:30:41.064 Max Number of Namespaces: 256 00:30:41.064 Max Number of I/O Queues: 64 00:30:41.064 NVMe Specification Version (VS): 1.4 00:30:41.064 NVMe Specification Version (Identify): 1.4 00:30:41.064 Maximum Queue Entries: 2048 00:30:41.064 Contiguous Queues Required: Yes 00:30:41.064 Arbitration Mechanisms Supported 00:30:41.064 Weighted Round Robin: Not Supported 00:30:41.064 Vendor Specific: Not Supported 00:30:41.064 Reset Timeout: 7500 ms 00:30:41.064 Doorbell Stride: 4 bytes 00:30:41.064 NVM Subsystem Reset: Not Supported 00:30:41.064 Command Sets Supported 00:30:41.064 NVM Command Set: Supported 00:30:41.064 Boot Partition: Not Supported 00:30:41.064 Memory Page Size Minimum: 4096 bytes 00:30:41.064 Memory Page Size Maximum: 65536 bytes 00:30:41.064 Persistent Memory Region: Not Supported 00:30:41.064 Optional Asynchronous Events Supported 00:30:41.064 Namespace Attribute Notices: Supported 00:30:41.064 Firmware Activation Notices: Not Supported 00:30:41.064 ANA Change Notices: Not Supported 00:30:41.064 PLE Aggregate Log Change Notices: Not Supported 00:30:41.064 LBA Status Info Alert Notices: Not Supported 00:30:41.064 EGE Aggregate Log Change Notices: Not Supported 00:30:41.064 Normal NVM Subsystem Shutdown event: Not Supported 00:30:41.064 Zone Descriptor Change Notices: Not Supported 00:30:41.064 Discovery Log Change Notices: Not Supported 00:30:41.064 Controller Attributes 00:30:41.064 128-bit Host Identifier: Not Supported 00:30:41.064 Non-Operational Permissive Mode: Not Supported 00:30:41.064 NVM Sets: Not Supported 00:30:41.064 Read Recovery Levels: Not Supported 00:30:41.064 Endurance Groups: Not Supported 00:30:41.064 Predictable Latency Mode: Not Supported 00:30:41.064 Traffic Based Keep ALive: Not Supported 00:30:41.064 Namespace Granularity: Not Supported 00:30:41.064 SQ Associations: Not Supported 00:30:41.064 UUID List: Not Supported 00:30:41.064 Multi-Domain Subsystem: Not Supported 00:30:41.064 Fixed Capacity Management: Not Supported 00:30:41.064 Variable Capacity Management: Not Supported 00:30:41.064 Delete Endurance Group: Not Supported 00:30:41.064 Delete NVM Set: Not Supported 00:30:41.064 Extended LBA Formats Supported: Supported 00:30:41.064 Flexible Data Placement Supported: Not Supported 00:30:41.064 00:30:41.064 Controller Memory Buffer Support 00:30:41.064 ================================ 00:30:41.064 Supported: No 00:30:41.064 00:30:41.064 Persistent Memory Region Support 00:30:41.064 ================================ 00:30:41.064 Supported: No 00:30:41.064 00:30:41.064 Admin Command Set Attributes 00:30:41.064 ============================ 00:30:41.064 Security Send/Receive: Not Supported 00:30:41.064 Format NVM: Supported 00:30:41.064 Firmware Activate/Download: Not Supported 00:30:41.064 Namespace Management: Supported 00:30:41.064 Device Self-Test: Not Supported 00:30:41.064 Directives: Supported 00:30:41.064 NVMe-MI: Not Supported 00:30:41.064 Virtualization Management: Not Supported 00:30:41.064 Doorbell Buffer Config: Supported 00:30:41.064 Get LBA Status Capability: Not Supported 00:30:41.064 Command & Feature Lockdown Capability: Not Supported 00:30:41.064 Abort Command Limit: 4 00:30:41.064 Async Event Request Limit: 4 00:30:41.064 Number of Firmware Slots: N/A 00:30:41.064 Firmware Slot 1 Read-Only: N/A 00:30:41.064 Firmware Activation Without Reset: N/A 00:30:41.065 Multiple Update Detection Support: N/A 00:30:41.065 Firmware Update Granularity: No Information 
Provided 00:30:41.065 Per-Namespace SMART Log: Yes 00:30:41.065 Asymmetric Namespace Access Log Page: Not Supported 00:30:41.065 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:41.065 Command Effects Log Page: Supported 00:30:41.065 Get Log Page Extended Data: Supported 00:30:41.065 Telemetry Log Pages: Not Supported 00:30:41.065 Persistent Event Log Pages: Not Supported 00:30:41.065 Supported Log Pages Log Page: May Support 00:30:41.065 Commands Supported & Effects Log Page: Not Supported 00:30:41.065 Feature Identifiers & Effects Log Page:May Support 00:30:41.065 NVMe-MI Commands & Effects Log Page: May Support 00:30:41.065 Data Area 4 for Telemetry Log: Not Supported 00:30:41.065 Error Log Page Entries Supported: 1 00:30:41.065 Keep Alive: Not Supported 00:30:41.065 00:30:41.065 NVM Command Set Attributes 00:30:41.065 ========================== 00:30:41.065 Submission Queue Entry Size 00:30:41.065 Max: 64 00:30:41.065 Min: 64 00:30:41.065 Completion Queue Entry Size 00:30:41.065 Max: 16 00:30:41.065 Min: 16 00:30:41.065 Number of Namespaces: 256 00:30:41.065 Compare Command: Supported 00:30:41.065 Write Uncorrectable Command: Not Supported 00:30:41.065 Dataset Management Command: Supported 00:30:41.065 Write Zeroes Command: Supported 00:30:41.065 Set Features Save Field: Supported 00:30:41.065 Reservations: Not Supported 00:30:41.065 Timestamp: Supported 00:30:41.065 Copy: Supported 00:30:41.065 Volatile Write Cache: Present 00:30:41.065 Atomic Write Unit (Normal): 1 00:30:41.065 Atomic Write Unit (PFail): 1 00:30:41.065 Atomic Compare & Write Unit: 1 00:30:41.065 Fused Compare & Write: Not Supported 00:30:41.065 Scatter-Gather List 00:30:41.065 SGL Command Set: Supported 00:30:41.065 SGL Keyed: Not Supported 00:30:41.065 SGL Bit Bucket Descriptor: Not Supported 00:30:41.065 SGL Metadata Pointer: Not Supported 00:30:41.065 Oversized SGL: Not Supported 00:30:41.065 SGL Metadata Address: Not Supported 00:30:41.065 SGL Offset: Not Supported 00:30:41.065 Transport SGL Data Block: Not Supported 00:30:41.065 Replay Protected Memory Block: Not Supported 00:30:41.065 00:30:41.065 Firmware Slot Information 00:30:41.065 ========================= 00:30:41.065 Active slot: 1 00:30:41.065 Slot 1 Firmware Revision: 1.0 00:30:41.065 00:30:41.065 00:30:41.065 Commands Supported and Effects 00:30:41.065 ============================== 00:30:41.065 Admin Commands 00:30:41.065 -------------- 00:30:41.065 Delete I/O Submission Queue (00h): Supported 00:30:41.065 Create I/O Submission Queue (01h): Supported 00:30:41.065 Get Log Page (02h): Supported 00:30:41.065 Delete I/O Completion Queue (04h): Supported 00:30:41.065 Create I/O Completion Queue (05h): Supported 00:30:41.065 Identify (06h): Supported 00:30:41.065 Abort (08h): Supported 00:30:41.065 Set Features (09h): Supported 00:30:41.065 Get Features (0Ah): Supported 00:30:41.065 Asynchronous Event Request (0Ch): Supported 00:30:41.065 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:41.065 Directive Send (19h): Supported 00:30:41.065 Directive Receive (1Ah): Supported 00:30:41.065 Virtualization Management (1Ch): Supported 00:30:41.065 Doorbell Buffer Config (7Ch): Supported 00:30:41.065 Format NVM (80h): Supported LBA-Change 00:30:41.065 I/O Commands 00:30:41.065 ------------ 00:30:41.065 Flush (00h): Supported LBA-Change 00:30:41.065 Write (01h): Supported LBA-Change 00:30:41.065 Read (02h): Supported 00:30:41.065 Compare (05h): Supported 00:30:41.065 Write Zeroes (08h): Supported LBA-Change 00:30:41.065 Dataset Management (09h): 
Supported LBA-Change 00:30:41.065 Unknown (0Ch): Supported 00:30:41.065 Unknown (12h): Supported 00:30:41.065 Copy (19h): Supported LBA-Change 00:30:41.065 Unknown (1Dh): Supported LBA-Change 00:30:41.065 00:30:41.065 Error Log 00:30:41.065 ========= 00:30:41.065 00:30:41.065 Arbitration 00:30:41.065 =========== 00:30:41.065 Arbitration Burst: no limit 00:30:41.065 00:30:41.065 Power Management 00:30:41.065 ================ 00:30:41.065 Number of Power States: 1 00:30:41.065 Current Power State: Power State #0 00:30:41.065 Power State #0: 00:30:41.065 Max Power: 25.00 W 00:30:41.065 Non-Operational State: Operational 00:30:41.065 Entry Latency: 16 microseconds 00:30:41.065 Exit Latency: 4 microseconds 00:30:41.065 Relative Read Throughput: 0 00:30:41.065 Relative Read Latency: 0 00:30:41.065 Relative Write Throughput: 0 00:30:41.065 Relative Write Latency: 0 00:30:41.324 Idle Power: Not Reported 00:30:41.324 Active Power: Not Reported 00:30:41.324 Non-Operational Permissive Mode: Not Supported 00:30:41.324 00:30:41.324 Health Information 00:30:41.324 ================== 00:30:41.324 Critical Warnings: 00:30:41.324 Available Spare Space: OK 00:30:41.324 Temperature: OK 00:30:41.324 Device Reliability: OK 00:30:41.324 Read Only: No 00:30:41.324 Volatile Memory Backup: OK 00:30:41.324 Current Temperature: 323 Kelvin (50 Celsius) 00:30:41.324 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:41.324 Available Spare: 0% 00:30:41.324 Available Spare Threshold: 0% 00:30:41.324 Life Percentage Used: 0% 00:30:41.324 Data Units Read: 8692 00:30:41.324 Data Units Written: 4239 00:30:41.324 Host Read Commands: 307937 00:30:41.324 Host Write Commands: 169090 00:30:41.324 Controller Busy Time: 0 minutes 00:30:41.324 Power Cycles: 0 00:30:41.324 Power On Hours: 0 hours 00:30:41.324 Unsafe Shutdowns: 0 00:30:41.324 Unrecoverable Media Errors: 0 00:30:41.324 Lifetime Error Log Entries: 0 00:30:41.324 Warning Temperature Time: 0 minutes 00:30:41.324 Critical Temperature Time: 0 minutes 00:30:41.324 00:30:41.324 Number of Queues 00:30:41.324 ================ 00:30:41.324 Number of I/O Submission Queues: 64 00:30:41.324 Number of I/O Completion Queues: 64 00:30:41.324 00:30:41.324 ZNS Specific Controller Data 00:30:41.324 ============================ 00:30:41.324 Zone Append Size Limit: 0 00:30:41.324 00:30:41.324 00:30:41.324 Active Namespaces 00:30:41.324 ================= 00:30:41.324 Namespace ID:1 00:30:41.324 Error Recovery Timeout: Unlimited 00:30:41.324 Command Set Identifier: NVM (00h) 00:30:41.324 Deallocate: Supported 00:30:41.324 Deallocated/Unwritten Error: Supported 00:30:41.324 Deallocated Read Value: All 0x00 00:30:41.324 Deallocate in Write Zeroes: Not Supported 00:30:41.324 Deallocated Guard Field: 0xFFFF 00:30:41.324 Flush: Supported 00:30:41.324 Reservation: Not Supported 00:30:41.324 Namespace Sharing Capabilities: Private 00:30:41.324 Size (in LBAs): 1310720 (5GiB) 00:30:41.324 Capacity (in LBAs): 1310720 (5GiB) 00:30:41.324 Utilization (in LBAs): 1310720 (5GiB) 00:30:41.324 Thin Provisioning: Not Supported 00:30:41.324 Per-NS Atomic Units: No 00:30:41.324 Maximum Single Source Range Length: 128 00:30:41.324 Maximum Copy Length: 128 00:30:41.324 Maximum Source Range Count: 128 00:30:41.324 NGUID/EUI64 Never Reused: No 00:30:41.324 Namespace Write Protected: No 00:30:41.324 Number of LBA Formats: 8 00:30:41.324 Current LBA Format: LBA Format #04 00:30:41.324 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:41.324 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:41.324 LBA 
Format #02: Data Size: 512 Metadata Size: 16 00:30:41.324 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:41.324 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:41.324 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:41.324 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:41.324 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:41.324 00:30:41.324 17:09:29 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:41.324 17:09:29 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:30:41.584 ===================================================== 00:30:41.584 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:41.584 ===================================================== 00:30:41.584 Controller Capabilities/Features 00:30:41.584 ================================ 00:30:41.584 Vendor ID: 1b36 00:30:41.584 Subsystem Vendor ID: 1af4 00:30:41.584 Serial Number: 12340 00:30:41.584 Model Number: QEMU NVMe Ctrl 00:30:41.584 Firmware Version: 8.0.0 00:30:41.584 Recommended Arb Burst: 6 00:30:41.584 IEEE OUI Identifier: 00 54 52 00:30:41.584 Multi-path I/O 00:30:41.584 May have multiple subsystem ports: No 00:30:41.584 May have multiple controllers: No 00:30:41.584 Associated with SR-IOV VF: No 00:30:41.584 Max Data Transfer Size: 524288 00:30:41.584 Max Number of Namespaces: 256 00:30:41.584 Max Number of I/O Queues: 64 00:30:41.584 NVMe Specification Version (VS): 1.4 00:30:41.584 NVMe Specification Version (Identify): 1.4 00:30:41.584 Maximum Queue Entries: 2048 00:30:41.584 Contiguous Queues Required: Yes 00:30:41.584 Arbitration Mechanisms Supported 00:30:41.584 Weighted Round Robin: Not Supported 00:30:41.584 Vendor Specific: Not Supported 00:30:41.584 Reset Timeout: 7500 ms 00:30:41.584 Doorbell Stride: 4 bytes 00:30:41.584 NVM Subsystem Reset: Not Supported 00:30:41.584 Command Sets Supported 00:30:41.584 NVM Command Set: Supported 00:30:41.584 Boot Partition: Not Supported 00:30:41.584 Memory Page Size Minimum: 4096 bytes 00:30:41.584 Memory Page Size Maximum: 65536 bytes 00:30:41.584 Persistent Memory Region: Not Supported 00:30:41.584 Optional Asynchronous Events Supported 00:30:41.584 Namespace Attribute Notices: Supported 00:30:41.584 Firmware Activation Notices: Not Supported 00:30:41.584 ANA Change Notices: Not Supported 00:30:41.584 PLE Aggregate Log Change Notices: Not Supported 00:30:41.584 LBA Status Info Alert Notices: Not Supported 00:30:41.584 EGE Aggregate Log Change Notices: Not Supported 00:30:41.584 Normal NVM Subsystem Shutdown event: Not Supported 00:30:41.584 Zone Descriptor Change Notices: Not Supported 00:30:41.584 Discovery Log Change Notices: Not Supported 00:30:41.584 Controller Attributes 00:30:41.584 128-bit Host Identifier: Not Supported 00:30:41.584 Non-Operational Permissive Mode: Not Supported 00:30:41.584 NVM Sets: Not Supported 00:30:41.584 Read Recovery Levels: Not Supported 00:30:41.584 Endurance Groups: Not Supported 00:30:41.584 Predictable Latency Mode: Not Supported 00:30:41.584 Traffic Based Keep ALive: Not Supported 00:30:41.584 Namespace Granularity: Not Supported 00:30:41.584 SQ Associations: Not Supported 00:30:41.584 UUID List: Not Supported 00:30:41.584 Multi-Domain Subsystem: Not Supported 00:30:41.584 Fixed Capacity Management: Not Supported 00:30:41.584 Variable Capacity Management: Not Supported 00:30:41.584 Delete Endurance Group: Not Supported 00:30:41.584 Delete NVM Set: Not Supported 00:30:41.584 Extended LBA Formats Supported: Supported 
00:30:41.584 Flexible Data Placement Supported: Not Supported 00:30:41.584 00:30:41.584 Controller Memory Buffer Support 00:30:41.584 ================================ 00:30:41.584 Supported: No 00:30:41.584 00:30:41.584 Persistent Memory Region Support 00:30:41.584 ================================ 00:30:41.584 Supported: No 00:30:41.584 00:30:41.584 Admin Command Set Attributes 00:30:41.584 ============================ 00:30:41.584 Security Send/Receive: Not Supported 00:30:41.584 Format NVM: Supported 00:30:41.584 Firmware Activate/Download: Not Supported 00:30:41.584 Namespace Management: Supported 00:30:41.584 Device Self-Test: Not Supported 00:30:41.584 Directives: Supported 00:30:41.584 NVMe-MI: Not Supported 00:30:41.584 Virtualization Management: Not Supported 00:30:41.584 Doorbell Buffer Config: Supported 00:30:41.584 Get LBA Status Capability: Not Supported 00:30:41.584 Command & Feature Lockdown Capability: Not Supported 00:30:41.584 Abort Command Limit: 4 00:30:41.584 Async Event Request Limit: 4 00:30:41.584 Number of Firmware Slots: N/A 00:30:41.584 Firmware Slot 1 Read-Only: N/A 00:30:41.584 Firmware Activation Without Reset: N/A 00:30:41.584 Multiple Update Detection Support: N/A 00:30:41.584 Firmware Update Granularity: No Information Provided 00:30:41.584 Per-Namespace SMART Log: Yes 00:30:41.584 Asymmetric Namespace Access Log Page: Not Supported 00:30:41.584 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:41.585 Command Effects Log Page: Supported 00:30:41.585 Get Log Page Extended Data: Supported 00:30:41.585 Telemetry Log Pages: Not Supported 00:30:41.585 Persistent Event Log Pages: Not Supported 00:30:41.585 Supported Log Pages Log Page: May Support 00:30:41.585 Commands Supported & Effects Log Page: Not Supported 00:30:41.585 Feature Identifiers & Effects Log Page:May Support 00:30:41.585 NVMe-MI Commands & Effects Log Page: May Support 00:30:41.585 Data Area 4 for Telemetry Log: Not Supported 00:30:41.585 Error Log Page Entries Supported: 1 00:30:41.585 Keep Alive: Not Supported 00:30:41.585 00:30:41.585 NVM Command Set Attributes 00:30:41.585 ========================== 00:30:41.585 Submission Queue Entry Size 00:30:41.585 Max: 64 00:30:41.585 Min: 64 00:30:41.585 Completion Queue Entry Size 00:30:41.585 Max: 16 00:30:41.585 Min: 16 00:30:41.585 Number of Namespaces: 256 00:30:41.585 Compare Command: Supported 00:30:41.585 Write Uncorrectable Command: Not Supported 00:30:41.585 Dataset Management Command: Supported 00:30:41.585 Write Zeroes Command: Supported 00:30:41.585 Set Features Save Field: Supported 00:30:41.585 Reservations: Not Supported 00:30:41.585 Timestamp: Supported 00:30:41.585 Copy: Supported 00:30:41.585 Volatile Write Cache: Present 00:30:41.585 Atomic Write Unit (Normal): 1 00:30:41.585 Atomic Write Unit (PFail): 1 00:30:41.585 Atomic Compare & Write Unit: 1 00:30:41.585 Fused Compare & Write: Not Supported 00:30:41.585 Scatter-Gather List 00:30:41.585 SGL Command Set: Supported 00:30:41.585 SGL Keyed: Not Supported 00:30:41.585 SGL Bit Bucket Descriptor: Not Supported 00:30:41.585 SGL Metadata Pointer: Not Supported 00:30:41.585 Oversized SGL: Not Supported 00:30:41.585 SGL Metadata Address: Not Supported 00:30:41.585 SGL Offset: Not Supported 00:30:41.585 Transport SGL Data Block: Not Supported 00:30:41.585 Replay Protected Memory Block: Not Supported 00:30:41.585 00:30:41.585 Firmware Slot Information 00:30:41.585 ========================= 00:30:41.585 Active slot: 1 00:30:41.585 Slot 1 Firmware Revision: 1.0 00:30:41.585 00:30:41.585 
00:30:41.585 Commands Supported and Effects 00:30:41.585 ============================== 00:30:41.585 Admin Commands 00:30:41.585 -------------- 00:30:41.585 Delete I/O Submission Queue (00h): Supported 00:30:41.585 Create I/O Submission Queue (01h): Supported 00:30:41.585 Get Log Page (02h): Supported 00:30:41.585 Delete I/O Completion Queue (04h): Supported 00:30:41.585 Create I/O Completion Queue (05h): Supported 00:30:41.585 Identify (06h): Supported 00:30:41.585 Abort (08h): Supported 00:30:41.585 Set Features (09h): Supported 00:30:41.585 Get Features (0Ah): Supported 00:30:41.585 Asynchronous Event Request (0Ch): Supported 00:30:41.585 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:41.585 Directive Send (19h): Supported 00:30:41.585 Directive Receive (1Ah): Supported 00:30:41.585 Virtualization Management (1Ch): Supported 00:30:41.585 Doorbell Buffer Config (7Ch): Supported 00:30:41.585 Format NVM (80h): Supported LBA-Change 00:30:41.585 I/O Commands 00:30:41.585 ------------ 00:30:41.585 Flush (00h): Supported LBA-Change 00:30:41.585 Write (01h): Supported LBA-Change 00:30:41.585 Read (02h): Supported 00:30:41.585 Compare (05h): Supported 00:30:41.585 Write Zeroes (08h): Supported LBA-Change 00:30:41.585 Dataset Management (09h): Supported LBA-Change 00:30:41.585 Unknown (0Ch): Supported 00:30:41.585 Unknown (12h): Supported 00:30:41.585 Copy (19h): Supported LBA-Change 00:30:41.585 Unknown (1Dh): Supported LBA-Change 00:30:41.585 00:30:41.585 Error Log 00:30:41.585 ========= 00:30:41.585 00:30:41.585 Arbitration 00:30:41.585 =========== 00:30:41.585 Arbitration Burst: no limit 00:30:41.585 00:30:41.585 Power Management 00:30:41.585 ================ 00:30:41.585 Number of Power States: 1 00:30:41.585 Current Power State: Power State #0 00:30:41.585 Power State #0: 00:30:41.585 Max Power: 25.00 W 00:30:41.585 Non-Operational State: Operational 00:30:41.585 Entry Latency: 16 microseconds 00:30:41.585 Exit Latency: 4 microseconds 00:30:41.585 Relative Read Throughput: 0 00:30:41.585 Relative Read Latency: 0 00:30:41.585 Relative Write Throughput: 0 00:30:41.585 Relative Write Latency: 0 00:30:41.585 Idle Power: Not Reported 00:30:41.585 Active Power: Not Reported 00:30:41.585 Non-Operational Permissive Mode: Not Supported 00:30:41.585 00:30:41.585 Health Information 00:30:41.585 ================== 00:30:41.585 Critical Warnings: 00:30:41.585 Available Spare Space: OK 00:30:41.585 Temperature: OK 00:30:41.585 Device Reliability: OK 00:30:41.585 Read Only: No 00:30:41.585 Volatile Memory Backup: OK 00:30:41.585 Current Temperature: 323 Kelvin (50 Celsius) 00:30:41.585 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:41.585 Available Spare: 0% 00:30:41.585 Available Spare Threshold: 0% 00:30:41.585 Life Percentage Used: 0% 00:30:41.585 Data Units Read: 8692 00:30:41.585 Data Units Written: 4239 00:30:41.585 Host Read Commands: 307937 00:30:41.585 Host Write Commands: 169090 00:30:41.585 Controller Busy Time: 0 minutes 00:30:41.585 Power Cycles: 0 00:30:41.585 Power On Hours: 0 hours 00:30:41.585 Unsafe Shutdowns: 0 00:30:41.585 Unrecoverable Media Errors: 0 00:30:41.585 Lifetime Error Log Entries: 0 00:30:41.585 Warning Temperature Time: 0 minutes 00:30:41.585 Critical Temperature Time: 0 minutes 00:30:41.585 00:30:41.585 Number of Queues 00:30:41.585 ================ 00:30:41.585 Number of I/O Submission Queues: 64 00:30:41.585 Number of I/O Completion Queues: 64 00:30:41.585 00:30:41.585 ZNS Specific Controller Data 00:30:41.585 ============================ 
00:30:41.585 Zone Append Size Limit: 0 00:30:41.585 00:30:41.585 00:30:41.585 Active Namespaces 00:30:41.585 ================= 00:30:41.585 Namespace ID:1 00:30:41.585 Error Recovery Timeout: Unlimited 00:30:41.585 Command Set Identifier: NVM (00h) 00:30:41.585 Deallocate: Supported 00:30:41.585 Deallocated/Unwritten Error: Supported 00:30:41.585 Deallocated Read Value: All 0x00 00:30:41.585 Deallocate in Write Zeroes: Not Supported 00:30:41.585 Deallocated Guard Field: 0xFFFF 00:30:41.585 Flush: Supported 00:30:41.585 Reservation: Not Supported 00:30:41.585 Namespace Sharing Capabilities: Private 00:30:41.585 Size (in LBAs): 1310720 (5GiB) 00:30:41.585 Capacity (in LBAs): 1310720 (5GiB) 00:30:41.585 Utilization (in LBAs): 1310720 (5GiB) 00:30:41.585 Thin Provisioning: Not Supported 00:30:41.585 Per-NS Atomic Units: No 00:30:41.585 Maximum Single Source Range Length: 128 00:30:41.585 Maximum Copy Length: 128 00:30:41.585 Maximum Source Range Count: 128 00:30:41.585 NGUID/EUI64 Never Reused: No 00:30:41.585 Namespace Write Protected: No 00:30:41.585 Number of LBA Formats: 8 00:30:41.585 Current LBA Format: LBA Format #04 00:30:41.585 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:41.585 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:41.585 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:41.585 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:41.585 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:41.585 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:41.585 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:41.585 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:41.585 00:30:41.585 00:30:41.585 real 0m0.702s 00:30:41.585 user 0m0.309s 00:30:41.585 sys 0m0.288s 00:30:41.585 17:09:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:41.585 17:09:30 -- common/autotest_common.sh@10 -- # set +x 00:30:41.585 ************************************ 00:30:41.585 END TEST nvme_identify 00:30:41.585 ************************************ 00:30:41.585 17:09:30 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:30:41.585 17:09:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:41.585 17:09:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:41.585 17:09:30 -- common/autotest_common.sh@10 -- # set +x 00:30:41.585 ************************************ 00:30:41.585 START TEST nvme_perf 00:30:41.585 ************************************ 00:30:41.585 17:09:30 -- common/autotest_common.sh@1114 -- # nvme_perf 00:30:41.585 17:09:30 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:30:42.963 Initializing NVMe Controllers 00:30:42.963 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:42.963 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:30:42.963 Initialization complete. Launching workers. 
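The read-workload numbers that follow can be cross-checked directly: spdk_nvme_perf was started with -o 12288, so each I/O moves 12 KiB, and throughput in MiB/s is just IOPS x 12288 / 2^20. A quick sanity check against the summary line below (not part of the original run):

    echo "54883.02 * 12288 / 1048576" | bc -l    # ~643.16 MiB/s, matching the MiB/s column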
00:30:42.963 ======================================================== 00:30:42.963 Latency(us) 00:30:42.963 Device Information : IOPS MiB/s Average min max 00:30:42.963 PCIE (0000:00:06.0) NSID 1 from core 0: 54883.02 643.16 2333.32 1358.97 6893.87 00:30:42.963 ======================================================== 00:30:42.963 Total : 54883.02 643.16 2333.32 1358.97 6893.87 00:30:42.963 00:30:42.963 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:42.963 ================================================================================= 00:30:42.963 1.00000% : 1496.902us 00:30:42.963 10.00000% : 1683.084us 00:30:42.963 25.00000% : 1921.396us 00:30:42.963 50.00000% : 2308.655us 00:30:42.963 75.00000% : 2681.018us 00:30:42.963 90.00000% : 2949.120us 00:30:42.963 95.00000% : 3291.695us 00:30:42.963 98.00000% : 3574.691us 00:30:42.963 99.00000% : 3723.636us 00:30:42.963 99.50000% : 4081.105us 00:30:42.963 99.90000% : 5213.091us 00:30:42.963 99.99000% : 6732.335us 00:30:42.963 99.99900% : 6911.069us 00:30:42.963 99.99990% : 6911.069us 00:30:42.963 99.99999% : 6911.069us 00:30:42.963 00:30:42.963 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:42.963 ============================================================================== 00:30:42.963 Range in us Cumulative IO count 00:30:42.963 1355.404 - 1362.851: 0.0036% ( 2) 00:30:42.963 1362.851 - 1370.298: 0.0091% ( 3) 00:30:42.963 1370.298 - 1377.745: 0.0127% ( 2) 00:30:42.963 1377.745 - 1385.193: 0.0164% ( 2) 00:30:42.963 1385.193 - 1392.640: 0.0255% ( 5) 00:30:42.963 1392.640 - 1400.087: 0.0510% ( 14) 00:30:42.963 1400.087 - 1407.535: 0.0819% ( 17) 00:30:42.963 1407.535 - 1414.982: 0.1129% ( 17) 00:30:42.963 1414.982 - 1422.429: 0.1420% ( 16) 00:30:42.963 1422.429 - 1429.876: 0.1785% ( 20) 00:30:42.963 1429.876 - 1437.324: 0.2386% ( 33) 00:30:42.964 1437.324 - 1444.771: 0.2987% ( 33) 00:30:42.964 1444.771 - 1452.218: 0.3679% ( 38) 00:30:42.964 1452.218 - 1459.665: 0.4607% ( 51) 00:30:42.964 1459.665 - 1467.113: 0.5518% ( 50) 00:30:42.964 1467.113 - 1474.560: 0.6720% ( 66) 00:30:42.964 1474.560 - 1482.007: 0.8268% ( 85) 00:30:42.964 1482.007 - 1489.455: 0.9816% ( 85) 00:30:42.964 1489.455 - 1496.902: 1.1782% ( 108) 00:30:42.964 1496.902 - 1504.349: 1.3713% ( 106) 00:30:42.964 1504.349 - 1511.796: 1.6117% ( 132) 00:30:42.964 1511.796 - 1519.244: 1.8575% ( 135) 00:30:42.964 1519.244 - 1526.691: 2.1270% ( 148) 00:30:42.964 1526.691 - 1534.138: 2.3784% ( 138) 00:30:42.964 1534.138 - 1541.585: 2.6807% ( 166) 00:30:42.964 1541.585 - 1549.033: 2.9502% ( 148) 00:30:42.964 1549.033 - 1556.480: 3.2233% ( 150) 00:30:42.964 1556.480 - 1563.927: 3.5493% ( 179) 00:30:42.964 1563.927 - 1571.375: 3.8862% ( 185) 00:30:42.964 1571.375 - 1578.822: 4.2268% ( 187) 00:30:42.964 1578.822 - 1586.269: 4.5819% ( 195) 00:30:42.964 1586.269 - 1593.716: 4.9661% ( 211) 00:30:42.964 1593.716 - 1601.164: 5.3467% ( 209) 00:30:42.964 1601.164 - 1608.611: 5.7419% ( 217) 00:30:42.964 1608.611 - 1616.058: 6.1462% ( 222) 00:30:42.964 1616.058 - 1623.505: 6.5487% ( 221) 00:30:42.964 1623.505 - 1630.953: 6.9712% ( 232) 00:30:42.964 1630.953 - 1638.400: 7.3900% ( 230) 00:30:42.964 1638.400 - 1645.847: 7.8234% ( 238) 00:30:42.964 1645.847 - 1653.295: 8.2587% ( 239) 00:30:42.964 1653.295 - 1660.742: 8.6994% ( 242) 00:30:42.964 1660.742 - 1668.189: 9.1364% ( 240) 00:30:42.964 1668.189 - 1675.636: 9.5990% ( 254) 00:30:42.964 1675.636 - 1683.084: 10.0397% ( 242) 00:30:42.964 1683.084 - 1690.531: 10.4932% ( 249) 00:30:42.964 1690.531 - 1697.978: 
10.9594% ( 256) 00:30:42.964 1697.978 - 1705.425: 11.4438% ( 266) 00:30:42.964 1705.425 - 1712.873: 11.9118% ( 257) 00:30:42.964 1712.873 - 1720.320: 12.3525% ( 242) 00:30:42.964 1720.320 - 1727.767: 12.8515% ( 274) 00:30:42.964 1727.767 - 1735.215: 13.3195% ( 257) 00:30:42.964 1735.215 - 1742.662: 13.7857% ( 256) 00:30:42.964 1742.662 - 1750.109: 14.2537% ( 257) 00:30:42.964 1750.109 - 1757.556: 14.7418% ( 268) 00:30:42.964 1757.556 - 1765.004: 15.2043% ( 254) 00:30:42.964 1765.004 - 1772.451: 15.6869% ( 265) 00:30:42.964 1772.451 - 1779.898: 16.1695% ( 265) 00:30:42.964 1779.898 - 1787.345: 16.6521% ( 265) 00:30:42.964 1787.345 - 1794.793: 17.1165% ( 255) 00:30:42.964 1794.793 - 1802.240: 17.6227% ( 278) 00:30:42.964 1802.240 - 1809.687: 18.0908% ( 257) 00:30:42.964 1809.687 - 1817.135: 18.5916% ( 275) 00:30:42.964 1817.135 - 1824.582: 19.0814% ( 269) 00:30:42.964 1824.582 - 1832.029: 19.5476% ( 256) 00:30:42.964 1832.029 - 1839.476: 20.0375% ( 269) 00:30:42.964 1839.476 - 1846.924: 20.5237% ( 267) 00:30:42.964 1846.924 - 1854.371: 21.0100% ( 267) 00:30:42.964 1854.371 - 1861.818: 21.5108% ( 275) 00:30:42.964 1861.818 - 1869.265: 21.9806% ( 258) 00:30:42.964 1869.265 - 1876.713: 22.4669% ( 267) 00:30:42.964 1876.713 - 1884.160: 22.9658% ( 274) 00:30:42.964 1884.160 - 1891.607: 23.4248% ( 252) 00:30:42.964 1891.607 - 1899.055: 23.9328% ( 279) 00:30:42.964 1899.055 - 1906.502: 24.4118% ( 263) 00:30:42.964 1906.502 - 1921.396: 25.3915% ( 538) 00:30:42.964 1921.396 - 1936.291: 26.3804% ( 543) 00:30:42.964 1936.291 - 1951.185: 27.3547% ( 535) 00:30:42.964 1951.185 - 1966.080: 28.2980% ( 518) 00:30:42.964 1966.080 - 1980.975: 29.2923% ( 546) 00:30:42.964 1980.975 - 1995.869: 30.2630% ( 533) 00:30:42.964 1995.869 - 2010.764: 31.2482% ( 541) 00:30:42.964 2010.764 - 2025.658: 32.2261% ( 537) 00:30:42.964 2025.658 - 2040.553: 33.2131% ( 542) 00:30:42.964 2040.553 - 2055.447: 34.2093% ( 547) 00:30:42.964 2055.447 - 2070.342: 35.1617% ( 523) 00:30:42.964 2070.342 - 2085.236: 36.1178% ( 525) 00:30:42.964 2085.236 - 2100.131: 37.1030% ( 541) 00:30:42.964 2100.131 - 2115.025: 38.0554% ( 523) 00:30:42.964 2115.025 - 2129.920: 39.0261% ( 533) 00:30:42.964 2129.920 - 2144.815: 39.9767% ( 522) 00:30:42.964 2144.815 - 2159.709: 40.9364% ( 527) 00:30:42.964 2159.709 - 2174.604: 41.9143% ( 537) 00:30:42.964 2174.604 - 2189.498: 42.8722% ( 526) 00:30:42.964 2189.498 - 2204.393: 43.8611% ( 543) 00:30:42.964 2204.393 - 2219.287: 44.8299% ( 532) 00:30:42.964 2219.287 - 2234.182: 45.8078% ( 537) 00:30:42.964 2234.182 - 2249.076: 46.7712% ( 529) 00:30:42.964 2249.076 - 2263.971: 47.7309% ( 527) 00:30:42.964 2263.971 - 2278.865: 48.7143% ( 540) 00:30:42.964 2278.865 - 2293.760: 49.6467% ( 512) 00:30:42.964 2293.760 - 2308.655: 50.6210% ( 535) 00:30:42.964 2308.655 - 2323.549: 51.5953% ( 535) 00:30:42.964 2323.549 - 2338.444: 52.5586% ( 529) 00:30:42.964 2338.444 - 2353.338: 53.5202% ( 528) 00:30:42.964 2353.338 - 2368.233: 54.4708% ( 522) 00:30:42.964 2368.233 - 2383.127: 55.4469% ( 536) 00:30:42.964 2383.127 - 2398.022: 56.4194% ( 534) 00:30:42.964 2398.022 - 2412.916: 57.4210% ( 550) 00:30:42.964 2412.916 - 2427.811: 58.3825% ( 528) 00:30:42.964 2427.811 - 2442.705: 59.3513% ( 532) 00:30:42.964 2442.705 - 2457.600: 60.3384% ( 542) 00:30:42.964 2457.600 - 2472.495: 61.3272% ( 543) 00:30:42.964 2472.495 - 2487.389: 62.3215% ( 546) 00:30:42.964 2487.389 - 2502.284: 63.2740% ( 523) 00:30:42.964 2502.284 - 2517.178: 64.2756% ( 550) 00:30:42.964 2517.178 - 2532.073: 65.2644% ( 543) 00:30:42.964 2532.073 - 2546.967: 
66.2533% ( 543) 00:30:42.964 2546.967 - 2561.862: 67.2221% ( 532) 00:30:42.964 2561.862 - 2576.756: 68.1873% ( 530) 00:30:42.964 2576.756 - 2591.651: 69.1780% ( 544) 00:30:42.964 2591.651 - 2606.545: 70.1723% ( 546) 00:30:42.964 2606.545 - 2621.440: 71.1211% ( 521) 00:30:42.964 2621.440 - 2636.335: 72.0826% ( 528) 00:30:42.964 2636.335 - 2651.229: 73.0915% ( 554) 00:30:42.964 2651.229 - 2666.124: 74.0785% ( 542) 00:30:42.964 2666.124 - 2681.018: 75.0565% ( 537) 00:30:42.964 2681.018 - 2695.913: 76.0362% ( 538) 00:30:42.964 2695.913 - 2710.807: 77.0251% ( 543) 00:30:42.964 2710.807 - 2725.702: 77.9921% ( 531) 00:30:42.964 2725.702 - 2740.596: 78.9663% ( 535) 00:30:42.964 2740.596 - 2755.491: 79.9352% ( 532) 00:30:42.964 2755.491 - 2770.385: 80.9003% ( 530) 00:30:42.964 2770.385 - 2785.280: 81.8837% ( 540) 00:30:42.964 2785.280 - 2800.175: 82.8180% ( 513) 00:30:42.964 2800.175 - 2815.069: 83.7631% ( 519) 00:30:42.964 2815.069 - 2829.964: 84.6609% ( 493) 00:30:42.964 2829.964 - 2844.858: 85.5132% ( 468) 00:30:42.964 2844.858 - 2859.753: 86.3108% ( 438) 00:30:42.964 2859.753 - 2874.647: 87.0702% ( 417) 00:30:42.964 2874.647 - 2889.542: 87.7713% ( 385) 00:30:42.964 2889.542 - 2904.436: 88.4087% ( 350) 00:30:42.964 2904.436 - 2919.331: 89.0297% ( 341) 00:30:42.964 2919.331 - 2934.225: 89.5651% ( 294) 00:30:42.964 2934.225 - 2949.120: 90.0586% ( 271) 00:30:42.964 2949.120 - 2964.015: 90.4902% ( 237) 00:30:42.964 2964.015 - 2978.909: 90.9018% ( 226) 00:30:42.964 2978.909 - 2993.804: 91.2806% ( 208) 00:30:42.964 2993.804 - 3008.698: 91.5938% ( 172) 00:30:42.964 3008.698 - 3023.593: 91.8834% ( 159) 00:30:42.964 3023.593 - 3038.487: 92.1511% ( 147) 00:30:42.964 3038.487 - 3053.382: 92.3933% ( 133) 00:30:42.964 3053.382 - 3068.276: 92.6027% ( 115) 00:30:42.964 3068.276 - 3083.171: 92.8067% ( 112) 00:30:42.964 3083.171 - 3098.065: 92.9961% ( 104) 00:30:42.964 3098.065 - 3112.960: 93.1782% ( 100) 00:30:42.964 3112.960 - 3127.855: 93.3403% ( 89) 00:30:42.964 3127.855 - 3142.749: 93.5096% ( 93) 00:30:42.964 3142.749 - 3157.644: 93.6772% ( 92) 00:30:42.964 3157.644 - 3172.538: 93.8356% ( 87) 00:30:42.964 3172.538 - 3187.433: 93.9995% ( 90) 00:30:42.964 3187.433 - 3202.327: 94.1597% ( 88) 00:30:42.964 3202.327 - 3217.222: 94.3236% ( 90) 00:30:42.964 3217.222 - 3232.116: 94.4839% ( 88) 00:30:42.964 3232.116 - 3247.011: 94.6442% ( 88) 00:30:42.964 3247.011 - 3261.905: 94.8044% ( 88) 00:30:42.964 3261.905 - 3276.800: 94.9647% ( 88) 00:30:42.964 3276.800 - 3291.695: 95.1176% ( 84) 00:30:42.964 3291.695 - 3306.589: 95.2743% ( 86) 00:30:42.964 3306.589 - 3321.484: 95.4291% ( 85) 00:30:42.964 3321.484 - 3336.378: 95.5857% ( 86) 00:30:42.964 3336.378 - 3351.273: 95.7405% ( 85) 00:30:42.964 3351.273 - 3366.167: 95.8971% ( 86) 00:30:42.964 3366.167 - 3381.062: 96.0519% ( 85) 00:30:42.964 3381.062 - 3395.956: 96.2103% ( 87) 00:30:42.964 3395.956 - 3410.851: 96.3687% ( 87) 00:30:42.964 3410.851 - 3425.745: 96.5272% ( 87) 00:30:42.964 3425.745 - 3440.640: 96.6674% ( 77) 00:30:42.964 3440.640 - 3455.535: 96.8222% ( 85) 00:30:42.964 3455.535 - 3470.429: 96.9806% ( 87) 00:30:42.964 3470.429 - 3485.324: 97.1354% ( 85) 00:30:42.964 3485.324 - 3500.218: 97.2920% ( 86) 00:30:42.964 3500.218 - 3515.113: 97.4359% ( 79) 00:30:42.964 3515.113 - 3530.007: 97.5870% ( 83) 00:30:42.964 3530.007 - 3544.902: 97.7364% ( 82) 00:30:42.964 3544.902 - 3559.796: 97.8766% ( 77) 00:30:42.964 3559.796 - 3574.691: 98.0259% ( 82) 00:30:42.964 3574.691 - 3589.585: 98.1625% ( 75) 00:30:42.964 3589.585 - 3604.480: 98.3046% ( 78) 00:30:42.964 3604.480 
- 3619.375: 98.4302% ( 69) 00:30:42.964 3619.375 - 3634.269: 98.5559% ( 69) 00:30:42.964 3634.269 - 3649.164: 98.6633% ( 59) 00:30:42.964 3649.164 - 3664.058: 98.7671% ( 57) 00:30:42.964 3664.058 - 3678.953: 98.8527% ( 47) 00:30:42.964 3678.953 - 3693.847: 98.9274% ( 41) 00:30:42.964 3693.847 - 3708.742: 98.9929% ( 36) 00:30:42.964 3708.742 - 3723.636: 99.0476% ( 30) 00:30:42.964 3723.636 - 3738.531: 99.1004% ( 29) 00:30:42.964 3738.531 - 3753.425: 99.1459% ( 25) 00:30:42.965 3753.425 - 3768.320: 99.1823% ( 20) 00:30:42.965 3768.320 - 3783.215: 99.2133% ( 17) 00:30:42.965 3783.215 - 3798.109: 99.2406% ( 15) 00:30:42.965 3798.109 - 3813.004: 99.2679% ( 15) 00:30:42.965 3813.004 - 3842.793: 99.3098% ( 23) 00:30:42.965 3842.793 - 3872.582: 99.3444% ( 19) 00:30:42.965 3872.582 - 3902.371: 99.3754% ( 17) 00:30:42.965 3902.371 - 3932.160: 99.4027% ( 15) 00:30:42.965 3932.160 - 3961.949: 99.4264% ( 13) 00:30:42.965 3961.949 - 3991.738: 99.4500% ( 13) 00:30:42.965 3991.738 - 4021.527: 99.4755% ( 14) 00:30:42.965 4021.527 - 4051.316: 99.4992% ( 13) 00:30:42.965 4051.316 - 4081.105: 99.5229% ( 13) 00:30:42.965 4081.105 - 4110.895: 99.5447% ( 12) 00:30:42.965 4110.895 - 4140.684: 99.5702% ( 14) 00:30:42.965 4140.684 - 4170.473: 99.5957% ( 14) 00:30:42.965 4170.473 - 4200.262: 99.6212% ( 14) 00:30:42.965 4200.262 - 4230.051: 99.6485% ( 15) 00:30:42.965 4230.051 - 4259.840: 99.6758% ( 15) 00:30:42.965 4259.840 - 4289.629: 99.7013% ( 14) 00:30:42.965 4289.629 - 4319.418: 99.7268% ( 14) 00:30:42.965 4319.418 - 4349.207: 99.7505% ( 13) 00:30:42.965 4349.207 - 4378.996: 99.7705% ( 11) 00:30:42.965 4378.996 - 4408.785: 99.7924% ( 12) 00:30:42.965 4408.785 - 4438.575: 99.8088% ( 9) 00:30:42.965 4438.575 - 4468.364: 99.8215% ( 7) 00:30:42.965 4468.364 - 4498.153: 99.8343% ( 7) 00:30:42.965 4498.153 - 4527.942: 99.8416% ( 4) 00:30:42.965 4527.942 - 4557.731: 99.8452% ( 2) 00:30:42.965 4557.731 - 4587.520: 99.8507% ( 3) 00:30:42.965 4587.520 - 4617.309: 99.8561% ( 3) 00:30:42.965 4617.309 - 4647.098: 99.8616% ( 3) 00:30:42.965 4647.098 - 4676.887: 99.8671% ( 3) 00:30:42.965 4676.887 - 4706.676: 99.8725% ( 3) 00:30:42.965 4706.676 - 4736.465: 99.8762% ( 2) 00:30:42.965 4736.465 - 4766.255: 99.8780% ( 1) 00:30:42.965 4766.255 - 4796.044: 99.8798% ( 1) 00:30:42.965 4796.044 - 4825.833: 99.8816% ( 1) 00:30:42.965 4825.833 - 4855.622: 99.8834% ( 1) 00:30:42.965 4885.411 - 4915.200: 99.8853% ( 1) 00:30:42.965 4915.200 - 4944.989: 99.8871% ( 1) 00:30:42.965 4944.989 - 4974.778: 99.8889% ( 1) 00:30:42.965 4974.778 - 5004.567: 99.8907% ( 1) 00:30:42.965 5004.567 - 5034.356: 99.8926% ( 1) 00:30:42.965 5064.145 - 5093.935: 99.8944% ( 1) 00:30:42.965 5093.935 - 5123.724: 99.8962% ( 1) 00:30:42.965 5123.724 - 5153.513: 99.8980% ( 1) 00:30:42.965 5153.513 - 5183.302: 99.8998% ( 1) 00:30:42.965 5183.302 - 5213.091: 99.9017% ( 1) 00:30:42.965 5213.091 - 5242.880: 99.9035% ( 1) 00:30:42.965 5242.880 - 5272.669: 99.9053% ( 1) 00:30:42.965 5272.669 - 5302.458: 99.9071% ( 1) 00:30:42.965 5302.458 - 5332.247: 99.9089% ( 1) 00:30:42.965 5332.247 - 5362.036: 99.9108% ( 1) 00:30:42.965 5362.036 - 5391.825: 99.9126% ( 1) 00:30:42.965 5391.825 - 5421.615: 99.9144% ( 1) 00:30:42.965 5421.615 - 5451.404: 99.9162% ( 1) 00:30:42.965 5451.404 - 5481.193: 99.9181% ( 1) 00:30:42.965 5481.193 - 5510.982: 99.9199% ( 1) 00:30:42.965 5510.982 - 5540.771: 99.9217% ( 1) 00:30:42.965 5540.771 - 5570.560: 99.9235% ( 1) 00:30:42.965 5570.560 - 5600.349: 99.9253% ( 1) 00:30:42.965 5600.349 - 5630.138: 99.9272% ( 1) 00:30:42.965 5630.138 - 5659.927: 
99.9290% ( 1) 00:30:42.965 5689.716 - 5719.505: 99.9308% ( 1) 00:30:42.965 5719.505 - 5749.295: 99.9344% ( 2) 00:30:42.965 5779.084 - 5808.873: 99.9363% ( 1) 00:30:42.965 5808.873 - 5838.662: 99.9381% ( 1) 00:30:42.965 5838.662 - 5868.451: 99.9399% ( 1) 00:30:42.965 5868.451 - 5898.240: 99.9417% ( 1) 00:30:42.965 5898.240 - 5928.029: 99.9435% ( 1) 00:30:42.965 5928.029 - 5957.818: 99.9454% ( 1) 00:30:42.965 5957.818 - 5987.607: 99.9472% ( 1) 00:30:42.965 5987.607 - 6017.396: 99.9490% ( 1) 00:30:42.965 6017.396 - 6047.185: 99.9508% ( 1) 00:30:42.965 6047.185 - 6076.975: 99.9527% ( 1) 00:30:42.965 6076.975 - 6106.764: 99.9545% ( 1) 00:30:42.965 6106.764 - 6136.553: 99.9563% ( 1) 00:30:42.965 6136.553 - 6166.342: 99.9581% ( 1) 00:30:42.965 6166.342 - 6196.131: 99.9599% ( 1) 00:30:42.965 6196.131 - 6225.920: 99.9618% ( 1) 00:30:42.965 6225.920 - 6255.709: 99.9636% ( 1) 00:30:42.965 6255.709 - 6285.498: 99.9654% ( 1) 00:30:42.965 6285.498 - 6315.287: 99.9672% ( 1) 00:30:42.965 6315.287 - 6345.076: 99.9690% ( 1) 00:30:42.965 6345.076 - 6374.865: 99.9709% ( 1) 00:30:42.965 6374.865 - 6404.655: 99.9727% ( 1) 00:30:42.965 6404.655 - 6434.444: 99.9745% ( 1) 00:30:42.965 6434.444 - 6464.233: 99.9763% ( 1) 00:30:42.965 6464.233 - 6494.022: 99.9781% ( 1) 00:30:42.965 6494.022 - 6523.811: 99.9800% ( 1) 00:30:42.965 6553.600 - 6583.389: 99.9818% ( 1) 00:30:42.965 6583.389 - 6613.178: 99.9836% ( 1) 00:30:42.965 6613.178 - 6642.967: 99.9854% ( 1) 00:30:42.965 6642.967 - 6672.756: 99.9873% ( 1) 00:30:42.965 6672.756 - 6702.545: 99.9891% ( 1) 00:30:42.965 6702.545 - 6732.335: 99.9909% ( 1) 00:30:42.965 6732.335 - 6762.124: 99.9945% ( 2) 00:30:42.965 6762.124 - 6791.913: 99.9964% ( 1) 00:30:42.965 6791.913 - 6821.702: 99.9982% ( 1) 00:30:42.965 6881.280 - 6911.069: 100.0000% ( 1) 00:30:42.965 00:30:42.965 17:09:31 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:30:44.339 Initializing NVMe Controllers 00:30:44.339 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:44.339 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:30:44.339 Initialization complete. Launching workers. 
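The benchmark is now repeated with -w write in place of -w read (and without the trailing -N flag), keeping queue depth 128, 12 KiB I/Os, a 1-second run, and -i 0 so perf attaches to the stub's shared-memory group as a secondary process. The write table below follows the same layout as the read table above, and the same IOPS-to-throughput identity applies:

    echo "60136.00 * 12288 / 1048576" | bc -l    # ~704.72 MiB/s, as reported below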
00:30:44.339 ======================================================== 00:30:44.339 Latency(us) 00:30:44.339 Device Information : IOPS MiB/s Average min max 00:30:44.339 PCIE (0000:00:06.0) NSID 1 from core 0: 60136.00 704.72 2127.94 1128.91 8393.71 00:30:44.339 ======================================================== 00:30:44.339 Total : 60136.00 704.72 2127.94 1128.91 8393.71 00:30:44.339 00:30:44.339 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:44.339 ================================================================================= 00:30:44.339 1.00000% : 1534.138us 00:30:44.339 10.00000% : 1809.687us 00:30:44.340 25.00000% : 1951.185us 00:30:44.340 50.00000% : 2100.131us 00:30:44.340 75.00000% : 2278.865us 00:30:44.340 90.00000% : 2502.284us 00:30:44.340 95.00000% : 2681.018us 00:30:44.340 98.00000% : 2904.436us 00:30:44.340 99.00000% : 3038.487us 00:30:44.340 99.50000% : 3291.695us 00:30:44.340 99.90000% : 4110.895us 00:30:44.340 99.99000% : 5421.615us 00:30:44.340 99.99900% : 8400.524us 00:30:44.340 99.99990% : 8400.524us 00:30:44.340 99.99999% : 8400.524us 00:30:44.340 00:30:44.340 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:44.340 ============================================================================== 00:30:44.340 Range in us Cumulative IO count 00:30:44.340 1124.538 - 1131.985: 0.0017% ( 1) 00:30:44.340 1139.433 - 1146.880: 0.0033% ( 1) 00:30:44.340 1154.327 - 1161.775: 0.0050% ( 1) 00:30:44.340 1228.800 - 1236.247: 0.0067% ( 1) 00:30:44.340 1243.695 - 1251.142: 0.0100% ( 2) 00:30:44.340 1251.142 - 1258.589: 0.0133% ( 2) 00:30:44.340 1258.589 - 1266.036: 0.0183% ( 3) 00:30:44.340 1266.036 - 1273.484: 0.0200% ( 1) 00:30:44.340 1280.931 - 1288.378: 0.0283% ( 5) 00:30:44.340 1288.378 - 1295.825: 0.0349% ( 4) 00:30:44.340 1303.273 - 1310.720: 0.0399% ( 3) 00:30:44.340 1310.720 - 1318.167: 0.0432% ( 2) 00:30:44.340 1318.167 - 1325.615: 0.0449% ( 1) 00:30:44.340 1325.615 - 1333.062: 0.0532% ( 5) 00:30:44.340 1333.062 - 1340.509: 0.0549% ( 1) 00:30:44.340 1340.509 - 1347.956: 0.0615% ( 4) 00:30:44.340 1347.956 - 1355.404: 0.0649% ( 2) 00:30:44.340 1355.404 - 1362.851: 0.0732% ( 5) 00:30:44.340 1362.851 - 1370.298: 0.0848% ( 7) 00:30:44.340 1370.298 - 1377.745: 0.0964% ( 7) 00:30:44.340 1377.745 - 1385.193: 0.1064% ( 6) 00:30:44.340 1385.193 - 1392.640: 0.1114% ( 3) 00:30:44.340 1392.640 - 1400.087: 0.1280% ( 10) 00:30:44.340 1400.087 - 1407.535: 0.1430% ( 9) 00:30:44.340 1407.535 - 1414.982: 0.1546% ( 7) 00:30:44.340 1414.982 - 1422.429: 0.1979% ( 26) 00:30:44.340 1422.429 - 1429.876: 0.2095% ( 7) 00:30:44.340 1429.876 - 1437.324: 0.2278% ( 11) 00:30:44.340 1437.324 - 1444.771: 0.2411% ( 8) 00:30:44.340 1444.771 - 1452.218: 0.2644% ( 14) 00:30:44.340 1452.218 - 1459.665: 0.2794% ( 9) 00:30:44.340 1459.665 - 1467.113: 0.3076% ( 17) 00:30:44.340 1467.113 - 1474.560: 0.3259% ( 11) 00:30:44.340 1474.560 - 1482.007: 0.3725% ( 28) 00:30:44.340 1482.007 - 1489.455: 0.4274% ( 33) 00:30:44.340 1489.455 - 1496.902: 0.5138% ( 52) 00:30:44.340 1496.902 - 1504.349: 0.6618% ( 89) 00:30:44.340 1504.349 - 1511.796: 0.7383% ( 46) 00:30:44.340 1511.796 - 1519.244: 0.8215% ( 50) 00:30:44.340 1519.244 - 1526.691: 0.9262% ( 63) 00:30:44.340 1526.691 - 1534.138: 1.0493% ( 74) 00:30:44.340 1534.138 - 1541.585: 1.0992% ( 30) 00:30:44.340 1541.585 - 1549.033: 1.1491% ( 30) 00:30:44.340 1549.033 - 1556.480: 1.2056% ( 34) 00:30:44.340 1556.480 - 1563.927: 1.2754% ( 42) 00:30:44.340 1563.927 - 1571.375: 1.3469% ( 43) 00:30:44.340 1571.375 - 1578.822: 1.4301% ( 
50) 00:30:44.340 1578.822 - 1586.269: 1.5531% ( 74) 00:30:44.340 1586.269 - 1593.716: 1.6313% ( 47) 00:30:44.340 1593.716 - 1601.164: 1.7294% ( 59) 00:30:44.340 1601.164 - 1608.611: 1.8442% ( 69) 00:30:44.340 1608.611 - 1616.058: 1.9622% ( 71) 00:30:44.340 1616.058 - 1623.505: 2.1069% ( 87) 00:30:44.340 1623.505 - 1630.953: 2.3380% ( 139) 00:30:44.340 1630.953 - 1638.400: 2.5276% ( 114) 00:30:44.340 1638.400 - 1645.847: 2.6540% ( 76) 00:30:44.340 1645.847 - 1653.295: 2.8386% ( 111) 00:30:44.340 1653.295 - 1660.742: 2.9999% ( 97) 00:30:44.340 1660.742 - 1668.189: 3.1745% ( 105) 00:30:44.340 1668.189 - 1675.636: 3.3890% ( 129) 00:30:44.340 1675.636 - 1683.084: 3.6883% ( 180) 00:30:44.340 1683.084 - 1690.531: 3.8879% ( 120) 00:30:44.340 1690.531 - 1697.978: 4.0608% ( 104) 00:30:44.340 1697.978 - 1705.425: 4.3185% ( 155) 00:30:44.340 1705.425 - 1712.873: 4.5347% ( 130) 00:30:44.340 1712.873 - 1720.320: 4.7226% ( 113) 00:30:44.340 1720.320 - 1727.767: 4.9904% ( 161) 00:30:44.340 1727.767 - 1735.215: 5.2581% ( 161) 00:30:44.340 1735.215 - 1742.662: 5.5241% ( 160) 00:30:44.340 1742.662 - 1750.109: 5.9798% ( 274) 00:30:44.340 1750.109 - 1757.556: 6.4803% ( 301) 00:30:44.340 1757.556 - 1765.004: 6.8728% ( 236) 00:30:44.340 1765.004 - 1772.451: 7.3018% ( 258) 00:30:44.340 1772.451 - 1779.898: 7.8306% ( 318) 00:30:44.340 1779.898 - 1787.345: 8.3245% ( 297) 00:30:44.340 1787.345 - 1794.793: 9.1127% ( 474) 00:30:44.340 1794.793 - 1802.240: 9.6731% ( 337) 00:30:44.340 1802.240 - 1809.687: 10.2035% ( 319) 00:30:44.340 1809.687 - 1817.135: 10.8936% ( 415) 00:30:44.340 1817.135 - 1824.582: 11.5571% ( 399) 00:30:44.340 1824.582 - 1832.029: 12.2855% ( 438) 00:30:44.340 1832.029 - 1839.476: 13.0770% ( 476) 00:30:44.340 1839.476 - 1846.924: 13.8752% ( 480) 00:30:44.340 1846.924 - 1854.371: 14.6368% ( 458) 00:30:44.340 1854.371 - 1861.818: 15.4300% ( 477) 00:30:44.340 1861.818 - 1869.265: 16.1600% ( 439) 00:30:44.340 1869.265 - 1876.713: 16.9782% ( 492) 00:30:44.340 1876.713 - 1884.160: 17.8579% ( 529) 00:30:44.340 1884.160 - 1891.607: 18.7359% ( 528) 00:30:44.340 1891.607 - 1899.055: 19.6970% ( 578) 00:30:44.340 1899.055 - 1906.502: 20.6665% ( 583) 00:30:44.340 1906.502 - 1921.396: 22.8399% ( 1307) 00:30:44.340 1921.396 - 1936.291: 24.9418% ( 1264) 00:30:44.340 1936.291 - 1951.185: 26.8940% ( 1174) 00:30:44.340 1951.185 - 1966.080: 29.3568% ( 1481) 00:30:44.340 1966.080 - 1980.975: 31.6084% ( 1354) 00:30:44.340 1980.975 - 1995.869: 33.8932% ( 1374) 00:30:44.340 1995.869 - 2010.764: 36.1381% ( 1350) 00:30:44.340 2010.764 - 2025.658: 38.4844% ( 1411) 00:30:44.340 2025.658 - 2040.553: 41.2382% ( 1656) 00:30:44.340 2040.553 - 2055.447: 43.8706% ( 1583) 00:30:44.340 2055.447 - 2070.342: 46.7507% ( 1732) 00:30:44.340 2070.342 - 2085.236: 49.2201% ( 1485) 00:30:44.340 2085.236 - 2100.131: 51.7677% ( 1532) 00:30:44.340 2100.131 - 2115.025: 54.3651% ( 1562) 00:30:44.340 2115.025 - 2129.920: 56.7780% ( 1451) 00:30:44.340 2129.920 - 2144.815: 59.2274% ( 1473) 00:30:44.340 2144.815 - 2159.709: 61.3260% ( 1262) 00:30:44.340 2159.709 - 2174.604: 63.4628% ( 1285) 00:30:44.340 2174.604 - 2189.498: 65.5448% ( 1252) 00:30:44.340 2189.498 - 2204.393: 67.5635% ( 1214) 00:30:44.340 2204.393 - 2219.287: 69.5457% ( 1192) 00:30:44.340 2219.287 - 2234.182: 71.2701% ( 1037) 00:30:44.340 2234.182 - 2249.076: 72.9546% ( 1013) 00:30:44.340 2249.076 - 2263.971: 74.8636% ( 1148) 00:30:44.340 2263.971 - 2278.865: 76.5764% ( 1030) 00:30:44.340 2278.865 - 2293.760: 78.0680% ( 897) 00:30:44.340 2293.760 - 2308.655: 79.4599% ( 837) 00:30:44.340 
2308.655 - 2323.549: 80.8035% ( 808) 00:30:44.340 2323.549 - 2338.444: 81.9858% ( 711) 00:30:44.340 2338.444 - 2353.338: 83.1266% ( 686) 00:30:44.340 2353.338 - 2368.233: 84.0661% ( 565) 00:30:44.340 2368.233 - 2383.127: 84.9558% ( 535) 00:30:44.340 2383.127 - 2398.022: 85.8338% ( 528) 00:30:44.340 2398.022 - 2412.916: 86.5970% ( 459) 00:30:44.340 2412.916 - 2427.811: 87.2722% ( 406) 00:30:44.340 2427.811 - 2442.705: 87.9240% ( 392) 00:30:44.340 2442.705 - 2457.600: 88.5327% ( 366) 00:30:44.340 2457.600 - 2472.495: 89.1047% ( 344) 00:30:44.340 2472.495 - 2487.389: 89.6468% ( 326) 00:30:44.340 2487.389 - 2502.284: 90.1723% ( 316) 00:30:44.340 2502.284 - 2517.178: 90.7194% ( 329) 00:30:44.340 2517.178 - 2532.073: 91.2781% ( 336) 00:30:44.340 2532.073 - 2546.967: 91.7404% ( 278) 00:30:44.340 2546.967 - 2561.862: 92.1844% ( 267) 00:30:44.340 2561.862 - 2576.756: 92.6184% ( 261) 00:30:44.340 2576.756 - 2591.651: 93.0325% ( 249) 00:30:44.340 2591.651 - 2606.545: 93.4349% ( 242) 00:30:44.340 2606.545 - 2621.440: 93.8223% ( 233) 00:30:44.340 2621.440 - 2636.335: 94.1932% ( 223) 00:30:44.340 2636.335 - 2651.229: 94.5141% ( 193) 00:30:44.340 2651.229 - 2666.124: 94.8417% ( 197) 00:30:44.340 2666.124 - 2681.018: 95.1377% ( 178) 00:30:44.340 2681.018 - 2695.913: 95.4270% ( 174) 00:30:44.340 2695.913 - 2710.807: 95.7081% ( 169) 00:30:44.340 2710.807 - 2725.702: 95.9459% ( 143) 00:30:44.340 2725.702 - 2740.596: 96.1620% ( 130) 00:30:44.340 2740.596 - 2755.491: 96.3849% ( 134) 00:30:44.340 2755.491 - 2770.385: 96.5844% ( 120) 00:30:44.340 2770.385 - 2785.280: 96.7906% ( 124) 00:30:44.340 2785.280 - 2800.175: 96.9885% ( 119) 00:30:44.340 2800.175 - 2815.069: 97.1714% ( 110) 00:30:44.340 2815.069 - 2829.964: 97.3477% ( 106) 00:30:44.340 2829.964 - 2844.858: 97.5422% ( 117) 00:30:44.340 2844.858 - 2859.753: 97.6902% ( 89) 00:30:44.340 2859.753 - 2874.647: 97.8332% ( 86) 00:30:44.340 2874.647 - 2889.542: 97.9580% ( 75) 00:30:44.340 2889.542 - 2904.436: 98.0993% ( 85) 00:30:44.340 2904.436 - 2919.331: 98.2124% ( 68) 00:30:44.340 2919.331 - 2934.225: 98.3338% ( 73) 00:30:44.340 2934.225 - 2949.120: 98.4568% ( 74) 00:30:44.340 2949.120 - 2964.015: 98.5566% ( 60) 00:30:44.340 2964.015 - 2978.909: 98.6547% ( 59) 00:30:44.340 2978.909 - 2993.804: 98.7778% ( 74) 00:30:44.340 2993.804 - 3008.698: 98.8576% ( 48) 00:30:44.340 3008.698 - 3023.593: 98.9175% ( 36) 00:30:44.340 3023.593 - 3038.487: 99.0322% ( 69) 00:30:44.340 3038.487 - 3053.382: 99.1004% ( 41) 00:30:44.340 3053.382 - 3068.276: 99.1503% ( 30) 00:30:44.341 3068.276 - 3083.171: 99.1952% ( 27) 00:30:44.341 3083.171 - 3098.065: 99.2367% ( 25) 00:30:44.341 3098.065 - 3112.960: 99.2816% ( 27) 00:30:44.341 3112.960 - 3127.855: 99.3465% ( 39) 00:30:44.341 3127.855 - 3142.749: 99.3748% ( 17) 00:30:44.341 3142.749 - 3157.644: 99.3980% ( 14) 00:30:44.341 3157.644 - 3172.538: 99.4230% ( 15) 00:30:44.341 3172.538 - 3187.433: 99.4396% ( 10) 00:30:44.341 3187.433 - 3202.327: 99.4546% ( 9) 00:30:44.341 3202.327 - 3217.222: 99.4629% ( 5) 00:30:44.341 3217.222 - 3232.116: 99.4745% ( 7) 00:30:44.341 3232.116 - 3247.011: 99.4828% ( 5) 00:30:44.341 3247.011 - 3261.905: 99.4912% ( 5) 00:30:44.341 3261.905 - 3276.800: 99.4995% ( 5) 00:30:44.341 3276.800 - 3291.695: 99.5078% ( 5) 00:30:44.341 3291.695 - 3306.589: 99.5161% ( 5) 00:30:44.341 3306.589 - 3321.484: 99.5261% ( 6) 00:30:44.341 3321.484 - 3336.378: 99.5361% ( 6) 00:30:44.341 3336.378 - 3351.273: 99.5427% ( 4) 00:30:44.341 3351.273 - 3366.167: 99.5494% ( 4) 00:30:44.341 3366.167 - 3381.062: 99.5560% ( 4) 00:30:44.341 
3381.062 - 3395.956: 99.5627% ( 4) 00:30:44.341 3395.956 - 3410.851: 99.5693% ( 4) 00:30:44.341 3410.851 - 3425.745: 99.5743% ( 3) 00:30:44.341 3425.745 - 3440.640: 99.5776% ( 2) 00:30:44.341 3440.640 - 3455.535: 99.5843% ( 4) 00:30:44.341 3455.535 - 3470.429: 99.5943% ( 6) 00:30:44.341 3470.429 - 3485.324: 99.5992% ( 3) 00:30:44.341 3485.324 - 3500.218: 99.6109% ( 7) 00:30:44.341 3500.218 - 3515.113: 99.6159% ( 3) 00:30:44.341 3515.113 - 3530.007: 99.6258% ( 6) 00:30:44.341 3530.007 - 3544.902: 99.6342% ( 5) 00:30:44.341 3544.902 - 3559.796: 99.6441% ( 6) 00:30:44.341 3559.796 - 3574.691: 99.6591% ( 9) 00:30:44.341 3574.691 - 3589.585: 99.6691% ( 6) 00:30:44.341 3589.585 - 3604.480: 99.6807% ( 7) 00:30:44.341 3604.480 - 3619.375: 99.6907% ( 6) 00:30:44.341 3619.375 - 3634.269: 99.6974% ( 4) 00:30:44.341 3634.269 - 3649.164: 99.7107% ( 8) 00:30:44.341 3649.164 - 3664.058: 99.7156% ( 3) 00:30:44.341 3664.058 - 3678.953: 99.7273% ( 7) 00:30:44.341 3678.953 - 3693.847: 99.7423% ( 9) 00:30:44.341 3693.847 - 3708.742: 99.7506% ( 5) 00:30:44.341 3708.742 - 3723.636: 99.7622% ( 7) 00:30:44.341 3723.636 - 3738.531: 99.7738% ( 7) 00:30:44.341 3738.531 - 3753.425: 99.7938% ( 12) 00:30:44.341 3753.425 - 3768.320: 99.8071% ( 8) 00:30:44.341 3768.320 - 3783.215: 99.8138% ( 4) 00:30:44.341 3783.215 - 3798.109: 99.8287% ( 9) 00:30:44.341 3798.109 - 3813.004: 99.8337% ( 3) 00:30:44.341 3813.004 - 3842.793: 99.8487% ( 9) 00:30:44.341 3842.793 - 3872.582: 99.8603% ( 7) 00:30:44.341 3872.582 - 3902.371: 99.8720% ( 7) 00:30:44.341 3902.371 - 3932.160: 99.8803% ( 5) 00:30:44.341 3932.160 - 3961.949: 99.8853% ( 3) 00:30:44.341 3961.949 - 3991.738: 99.8869% ( 1) 00:30:44.341 3991.738 - 4021.527: 99.8936% ( 4) 00:30:44.341 4021.527 - 4051.316: 99.8969% ( 2) 00:30:44.341 4051.316 - 4081.105: 99.8986% ( 1) 00:30:44.341 4081.105 - 4110.895: 99.9019% ( 2) 00:30:44.341 4110.895 - 4140.684: 99.9036% ( 1) 00:30:44.341 4140.684 - 4170.473: 99.9052% ( 1) 00:30:44.341 4170.473 - 4200.262: 99.9069% ( 1) 00:30:44.341 4200.262 - 4230.051: 99.9102% ( 2) 00:30:44.341 4230.051 - 4259.840: 99.9119% ( 1) 00:30:44.341 4259.840 - 4289.629: 99.9169% ( 3) 00:30:44.341 4289.629 - 4319.418: 99.9218% ( 3) 00:30:44.341 4319.418 - 4349.207: 99.9235% ( 1) 00:30:44.341 4349.207 - 4378.996: 99.9252% ( 1) 00:30:44.341 4378.996 - 4408.785: 99.9268% ( 1) 00:30:44.341 4408.785 - 4438.575: 99.9302% ( 2) 00:30:44.341 4438.575 - 4468.364: 99.9318% ( 1) 00:30:44.341 4468.364 - 4498.153: 99.9335% ( 1) 00:30:44.341 4498.153 - 4527.942: 99.9351% ( 1) 00:30:44.341 4527.942 - 4557.731: 99.9385% ( 2) 00:30:44.341 4557.731 - 4587.520: 99.9401% ( 1) 00:30:44.341 4587.520 - 4617.309: 99.9418% ( 1) 00:30:44.341 4617.309 - 4647.098: 99.9435% ( 1) 00:30:44.341 4647.098 - 4676.887: 99.9468% ( 2) 00:30:44.341 4676.887 - 4706.676: 99.9485% ( 1) 00:30:44.341 4706.676 - 4736.465: 99.9501% ( 1) 00:30:44.341 4736.465 - 4766.255: 99.9518% ( 1) 00:30:44.341 4766.255 - 4796.044: 99.9534% ( 1) 00:30:44.341 4796.044 - 4825.833: 99.9568% ( 2) 00:30:44.341 4825.833 - 4855.622: 99.9584% ( 1) 00:30:44.341 4855.622 - 4885.411: 99.9601% ( 1) 00:30:44.341 4885.411 - 4915.200: 99.9634% ( 2) 00:30:44.341 4944.989 - 4974.778: 99.9667% ( 2) 00:30:44.341 4974.778 - 5004.567: 99.9684% ( 1) 00:30:44.341 5004.567 - 5034.356: 99.9701% ( 1) 00:30:44.341 5034.356 - 5064.145: 99.9717% ( 1) 00:30:44.341 5093.935 - 5123.724: 99.9751% ( 2) 00:30:44.341 5123.724 - 5153.513: 99.9767% ( 1) 00:30:44.341 5153.513 - 5183.302: 99.9784% ( 1) 00:30:44.341 5183.302 - 5213.091: 99.9800% ( 1) 00:30:44.341 
5213.091 - 5242.880: 99.9817% ( 1)
00:30:44.341 5242.880 - 5272.669: 99.9834% ( 1)
00:30:44.341 5272.669 - 5302.458: 99.9850% ( 1)
00:30:44.341 5302.458 - 5332.247: 99.9867% ( 1)
00:30:44.341 5332.247 - 5362.036: 99.9884% ( 1)
00:30:44.341 5391.825 - 5421.615: 99.9900% ( 1)
00:30:44.341 5779.084 - 5808.873: 99.9917% ( 1)
00:30:44.341 5808.873 - 5838.662: 99.9933% ( 1)
00:30:44.341 6255.709 - 6285.498: 99.9950% ( 1)
00:30:44.341 6434.444 - 6464.233: 99.9967% ( 1)
00:30:44.341 6881.280 - 6911.069: 99.9983% ( 1)
00:30:44.341 8340.945 - 8400.524: 100.0000% ( 1)
00:30:44.341
00:30:44.341 17:09:33 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:30:44.341
00:30:44.341 real 0m2.693s
00:30:44.341 user 0m2.277s
00:30:44.341 sys 0m0.247s
00:30:44.341 17:09:33 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:30:44.341 17:09:33 -- common/autotest_common.sh@10 -- # set +x
00:30:44.341 ************************************
00:30:44.341 END TEST nvme_perf
00:30:44.341 ************************************
00:30:44.341 17:09:33 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:30:44.341 17:09:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:30:44.341 17:09:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:44.341 17:09:33 -- common/autotest_common.sh@10 -- # set +x
00:30:44.341 ************************************
00:30:44.341 START TEST nvme_hello_world
00:30:44.341 ************************************
00:30:44.341 17:09:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:30:44.611 Initializing NVMe Controllers
00:30:44.611 Attached to 0000:00:06.0
00:30:44.611 Namespace ID: 1 size: 5GB
00:30:44.611 Initialization complete.
00:30:44.611 INFO: using host memory buffer for IO
00:30:44.611 Hello world!
00:30:44.611
00:30:44.611 real 0m0.330s
00:30:44.611 user 0m0.078s
00:30:44.611 sys 0m0.171s
00:30:44.611 17:09:33 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:30:44.611 ************************************
00:30:44.611 17:09:33 -- common/autotest_common.sh@10 -- # set +x
00:30:44.611 END TEST nvme_hello_world
00:30:44.611 ************************************
00:30:44.611 17:09:33 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:30:44.611 17:09:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:30:44.611 17:09:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:44.611 17:09:33 -- common/autotest_common.sh@10 -- # set +x
00:30:44.611 ************************************
00:30:44.611 START TEST nvme_sgl
00:30:44.611 ************************************
00:30:44.611 17:09:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:30:44.888 0000:00:06.0: build_io_request_0 Invalid IO length parameter
00:30:44.888 0000:00:06.0: build_io_request_1 Invalid IO length parameter
00:30:44.888 0000:00:06.0: build_io_request_3 Invalid IO length parameter
00:30:44.888 0000:00:06.0: build_io_request_8 Invalid IO length parameter
00:30:44.888 0000:00:06.0: build_io_request_9 Invalid IO length parameter
00:30:45.145 0000:00:06.0: build_io_request_11 Invalid IO length parameter
00:30:45.145 NVMe Readv/Writev Request test
00:30:45.145 Attached to 0000:00:06.0
00:30:45.145 0000:00:06.0: build_io_request_2 test passed
00:30:45.145 0000:00:06.0: build_io_request_4 test passed
00:30:45.145 0000:00:06.0: build_io_request_5 test passed
00:30:45.145 0000:00:06.0: build_io_request_6 test passed
00:30:45.145 0000:00:06.0: build_io_request_7 test passed
00:30:45.145 0000:00:06.0: build_io_request_10 test passed
00:30:45.145 Cleaning up...
00:30:45.145
00:30:45.145 real 0m0.367s
00:30:45.145 user 0m0.155s
00:30:45.145 sys 0m0.155s
00:30:45.145 17:09:33 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:30:45.145 ************************************
00:30:45.145 END TEST nvme_sgl
00:30:45.145 ************************************
00:30:45.145 17:09:33 -- common/autotest_common.sh@10 -- # set +x
00:30:45.145 17:09:33 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:30:45.145 17:09:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:30:45.145 17:09:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:45.145 17:09:33 -- common/autotest_common.sh@10 -- # set +x
00:30:45.145 ************************************
00:30:45.145 START TEST nvme_e2edp
00:30:45.145 ************************************
00:30:45.145 17:09:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:30:45.403 NVMe Write/Read with End-to-End data protection test
00:30:45.403 Attached to 0000:00:06.0
00:30:45.403 Cleaning up...
00:30:45.403
00:30:45.403 real 0m0.300s
00:30:45.403 user 0m0.101s
00:30:45.403 sys 0m0.127s
00:30:45.403 17:09:34 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:30:45.403 17:09:34 -- common/autotest_common.sh@10 -- # set +x
00:30:45.403 ************************************
00:30:45.403 END TEST nvme_e2edp
00:30:45.403 ************************************
00:30:45.403 17:09:34 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:30:45.403 17:09:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:30:45.403 17:09:34 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:45.403 17:09:34 -- common/autotest_common.sh@10 -- # set +x
00:30:45.403 ************************************
00:30:45.403 START TEST nvme_reserve
00:30:45.403 ************************************
00:30:45.403 17:09:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:30:45.661 =====================================================
00:30:45.661 NVMe Controller at PCI bus 0, device 6, function 0
00:30:45.661 =====================================================
00:30:45.661 Reservations: Not Supported
00:30:45.661 Reservation test passed
00:30:45.661
00:30:45.661 real 0m0.298s
00:30:45.661 user 0m0.117s
00:30:45.661 sys 0m0.109s
00:30:45.661 17:09:34 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:30:45.661 ************************************
00:30:45.661 END TEST nvme_reserve
00:30:45.661 17:09:34 -- common/autotest_common.sh@10 -- # set +x
00:30:45.661 ************************************
00:30:45.661 17:09:34 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:30:45.661 17:09:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:30:45.661 17:09:34 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:45.661 17:09:34 -- common/autotest_common.sh@10 -- # set +x
00:30:45.661 ************************************
00:30:45.661 START TEST nvme_err_injection
00:30:45.661 ************************************
00:30:45.661 17:09:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:30:46.227 NVMe Error Injection test
00:30:46.227 Attached to 0000:00:06.0
00:30:46.227 0000:00:06.0: get features failed as expected
00:30:46.227 0000:00:06.0: get features successfully as expected
00:30:46.227 0000:00:06.0: read failed as expected
00:30:46.227 0000:00:06.0: read successfully as expected
00:30:46.227 Cleaning up...
00:30:46.227 00:30:46.227 real 0m0.313s 00:30:46.227 user 0m0.107s 00:30:46.227 sys 0m0.133s 00:30:46.227 17:09:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:46.227 ************************************ 00:30:46.227 END TEST nvme_err_injection 00:30:46.227 ************************************ 00:30:46.227 17:09:34 -- common/autotest_common.sh@10 -- # set +x 00:30:46.227 17:09:34 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:30:46.227 17:09:34 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:30:46.227 17:09:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:46.227 17:09:34 -- common/autotest_common.sh@10 -- # set +x 00:30:46.227 ************************************ 00:30:46.227 START TEST nvme_overhead 00:30:46.227 ************************************ 00:30:46.227 17:09:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:30:47.603 Initializing NVMe Controllers 00:30:47.603 Attached to 0000:00:06.0 00:30:47.603 Initialization complete. Launching workers. 00:30:47.603 submit (in ns) avg, min, max = 15173.9, 10885.5, 64816.4 00:30:47.603 complete (in ns) avg, min, max = 9464.4, 7577.3, 117107.7 00:30:47.603 00:30:47.603 Submit histogram 00:30:47.603 ================ 00:30:47.603 Range in us Cumulative Count 00:30:47.603 10.880 - 10.938: 0.0121% ( 1) 00:30:47.603 12.160 - 12.218: 0.0484% ( 3) 00:30:47.603 12.218 - 12.276: 0.4964% ( 37) 00:30:47.603 12.276 - 12.335: 1.7676% ( 105) 00:30:47.603 12.335 - 12.393: 3.9346% ( 179) 00:30:47.603 12.393 - 12.451: 6.0169% ( 172) 00:30:47.603 12.451 - 12.509: 7.8814% ( 154) 00:30:47.603 12.509 - 12.567: 10.6174% ( 226) 00:30:47.603 12.567 - 12.625: 15.7264% ( 422) 00:30:47.603 12.625 - 12.684: 21.6586% ( 490) 00:30:47.603 12.684 - 12.742: 26.2591% ( 380) 00:30:47.603 12.742 - 12.800: 30.4116% ( 343) 00:30:47.603 12.800 - 12.858: 35.1332% ( 390) 00:30:47.603 12.858 - 12.916: 41.3923% ( 517) 00:30:47.603 12.916 - 12.975: 48.7288% ( 606) 00:30:47.603 12.975 - 13.033: 54.0194% ( 437) 00:30:47.603 13.033 - 13.091: 58.4140% ( 363) 00:30:47.603 13.091 - 13.149: 61.4044% ( 247) 00:30:47.603 13.149 - 13.207: 64.7579% ( 277) 00:30:47.603 13.207 - 13.265: 68.3898% ( 300) 00:30:47.603 13.265 - 13.324: 71.6707% ( 271) 00:30:47.603 13.324 - 13.382: 73.8378% ( 179) 00:30:47.603 13.382 - 13.440: 75.0847% ( 103) 00:30:47.603 13.440 - 13.498: 76.2833% ( 99) 00:30:47.603 13.498 - 13.556: 77.4455% ( 96) 00:30:47.603 13.556 - 13.615: 78.6077% ( 96) 00:30:47.603 13.615 - 13.673: 79.3341% ( 60) 00:30:47.603 13.673 - 13.731: 79.9031% ( 47) 00:30:47.603 13.731 - 13.789: 80.1937% ( 24) 00:30:47.603 13.789 - 13.847: 80.6416% ( 37) 00:30:47.603 13.847 - 13.905: 81.0775% ( 36) 00:30:47.603 13.905 - 13.964: 81.3923% ( 26) 00:30:47.603 13.964 - 14.022: 81.5860% ( 16) 00:30:47.603 14.022 - 14.080: 81.7554% ( 14) 00:30:47.603 14.080 - 14.138: 81.9249% ( 14) 00:30:47.603 14.138 - 14.196: 82.0218% ( 8) 00:30:47.603 14.196 - 14.255: 82.1550% ( 11) 00:30:47.603 14.255 - 14.313: 82.2881% ( 11) 00:30:47.603 14.313 - 14.371: 82.4697% ( 15) 00:30:47.603 14.371 - 14.429: 82.7240% ( 21) 00:30:47.603 14.429 - 14.487: 82.9540% ( 19) 00:30:47.603 14.487 - 14.545: 83.2082% ( 21) 00:30:47.603 14.545 - 14.604: 83.3535% ( 12) 00:30:47.603 14.604 - 14.662: 83.4867% ( 11) 00:30:47.603 14.662 - 14.720: 83.6199% ( 11) 00:30:47.603 14.720 - 14.778: 83.6804% ( 5) 00:30:47.603 14.778 - 14.836: 83.8620% ( 15) 00:30:47.603 
14.836 - 14.895: 83.8983% ( 3) 00:30:47.603 14.895 - 15.011: 84.0678% ( 14) 00:30:47.603 15.011 - 15.127: 84.2131% ( 12) 00:30:47.603 15.127 - 15.244: 84.3220% ( 9) 00:30:47.603 15.244 - 15.360: 84.3826% ( 5) 00:30:47.603 15.360 - 15.476: 84.4189% ( 3) 00:30:47.603 15.476 - 15.593: 84.4794% ( 5) 00:30:47.603 15.593 - 15.709: 84.5157% ( 3) 00:30:47.603 15.709 - 15.825: 84.5884% ( 6) 00:30:47.603 15.825 - 15.942: 84.6247% ( 3) 00:30:47.603 15.942 - 16.058: 84.6852% ( 5) 00:30:47.603 16.058 - 16.175: 84.7337% ( 4) 00:30:47.603 16.175 - 16.291: 84.7458% ( 1) 00:30:47.603 16.291 - 16.407: 84.7821% ( 3) 00:30:47.603 16.407 - 16.524: 84.7942% ( 1) 00:30:47.603 16.524 - 16.640: 84.8184% ( 2) 00:30:47.603 16.640 - 16.756: 84.8547% ( 3) 00:30:47.603 16.756 - 16.873: 84.9031% ( 4) 00:30:47.603 16.873 - 16.989: 84.9758% ( 6) 00:30:47.603 16.989 - 17.105: 85.0121% ( 3) 00:30:47.603 17.105 - 17.222: 85.0363% ( 2) 00:30:47.603 17.222 - 17.338: 85.0726% ( 3) 00:30:47.603 17.338 - 17.455: 85.0847% ( 1) 00:30:47.603 17.455 - 17.571: 85.1211% ( 3) 00:30:47.603 17.571 - 17.687: 85.1453% ( 2) 00:30:47.603 17.687 - 17.804: 85.1937% ( 4) 00:30:47.603 17.804 - 17.920: 85.2300% ( 3) 00:30:47.603 17.920 - 18.036: 85.2785% ( 4) 00:30:47.603 18.036 - 18.153: 85.3269% ( 4) 00:30:47.603 18.153 - 18.269: 85.3753% ( 4) 00:30:47.603 18.269 - 18.385: 85.4479% ( 6) 00:30:47.603 18.385 - 18.502: 85.4600% ( 1) 00:30:47.603 18.502 - 18.618: 85.4843% ( 2) 00:30:47.603 18.618 - 18.735: 85.5569% ( 6) 00:30:47.603 18.735 - 18.851: 85.6174% ( 5) 00:30:47.603 18.851 - 18.967: 85.6538% ( 3) 00:30:47.603 18.967 - 19.084: 85.6659% ( 1) 00:30:47.603 19.200 - 19.316: 85.7143% ( 4) 00:30:47.603 19.316 - 19.433: 85.7748% ( 5) 00:30:47.603 19.433 - 19.549: 85.8111% ( 3) 00:30:47.603 19.549 - 19.665: 85.8475% ( 3) 00:30:47.603 19.665 - 19.782: 85.8838% ( 3) 00:30:47.603 19.782 - 19.898: 85.9322% ( 4) 00:30:47.603 19.898 - 20.015: 85.9927% ( 5) 00:30:47.603 20.015 - 20.131: 86.0048% ( 1) 00:30:47.603 20.247 - 20.364: 86.0533% ( 4) 00:30:47.603 20.364 - 20.480: 86.0775% ( 2) 00:30:47.603 20.480 - 20.596: 86.1259% ( 4) 00:30:47.603 20.596 - 20.713: 86.1501% ( 2) 00:30:47.603 20.713 - 20.829: 86.1985% ( 4) 00:30:47.603 20.829 - 20.945: 86.2349% ( 3) 00:30:47.603 20.945 - 21.062: 86.2591% ( 2) 00:30:47.603 21.062 - 21.178: 86.2712% ( 1) 00:30:47.603 21.178 - 21.295: 86.3196% ( 4) 00:30:47.603 21.295 - 21.411: 86.3317% ( 1) 00:30:47.603 21.411 - 21.527: 86.3801% ( 4) 00:30:47.603 21.527 - 21.644: 86.4044% ( 2) 00:30:47.603 21.644 - 21.760: 86.4649% ( 5) 00:30:47.603 21.760 - 21.876: 86.4891% ( 2) 00:30:47.603 21.876 - 21.993: 86.5133% ( 2) 00:30:47.603 21.993 - 22.109: 86.5375% ( 2) 00:30:47.603 22.109 - 22.225: 86.5738% ( 3) 00:30:47.603 22.225 - 22.342: 86.6102% ( 3) 00:30:47.603 22.342 - 22.458: 86.6223% ( 1) 00:30:47.603 22.458 - 22.575: 86.6828% ( 5) 00:30:47.603 22.575 - 22.691: 86.7070% ( 2) 00:30:47.603 22.691 - 22.807: 86.7191% ( 1) 00:30:47.603 22.807 - 22.924: 86.7433% ( 2) 00:30:47.603 22.924 - 23.040: 86.7797% ( 3) 00:30:47.603 23.040 - 23.156: 86.7918% ( 1) 00:30:47.603 23.156 - 23.273: 86.8523% ( 5) 00:30:47.603 23.273 - 23.389: 86.8644% ( 1) 00:30:47.603 23.389 - 23.505: 86.9370% ( 6) 00:30:47.603 23.505 - 23.622: 86.9613% ( 2) 00:30:47.603 23.622 - 23.738: 87.0097% ( 4) 00:30:47.603 23.738 - 23.855: 87.0460% ( 3) 00:30:47.603 23.855 - 23.971: 87.0702% ( 2) 00:30:47.603 23.971 - 24.087: 87.0944% ( 2) 00:30:47.603 24.087 - 24.204: 87.1186% ( 2) 00:30:47.603 24.204 - 24.320: 87.1308% ( 1) 00:30:47.603 24.320 - 24.436: 87.1429% ( 1) 
00:30:47.603 24.436 - 24.553: 87.1550% ( 1) 00:30:47.603 24.553 - 24.669: 87.2034% ( 4) 00:30:47.603 24.785 - 24.902: 87.2155% ( 1) 00:30:47.603 24.902 - 25.018: 87.2518% ( 3) 00:30:47.603 25.135 - 25.251: 87.2639% ( 1) 00:30:47.603 25.251 - 25.367: 87.2881% ( 2) 00:30:47.603 25.367 - 25.484: 87.3002% ( 1) 00:30:47.603 25.600 - 25.716: 87.3123% ( 1) 00:30:47.603 26.065 - 26.182: 87.3245% ( 1) 00:30:47.603 26.182 - 26.298: 87.3366% ( 1) 00:30:47.603 26.415 - 26.531: 87.3487% ( 1) 00:30:47.603 26.647 - 26.764: 87.3729% ( 2) 00:30:47.603 26.764 - 26.880: 87.4213% ( 4) 00:30:47.603 26.880 - 26.996: 87.4939% ( 6) 00:30:47.603 26.996 - 27.113: 87.6271% ( 11) 00:30:47.603 27.113 - 27.229: 87.8935% ( 22) 00:30:47.603 27.229 - 27.345: 88.4140% ( 43) 00:30:47.603 27.345 - 27.462: 89.0315% ( 51) 00:30:47.603 27.462 - 27.578: 90.1453% ( 92) 00:30:47.603 27.578 - 27.695: 91.3438% ( 99) 00:30:47.603 27.695 - 27.811: 92.4334% ( 90) 00:30:47.603 27.811 - 27.927: 93.2567% ( 68) 00:30:47.603 27.927 - 28.044: 93.9104% ( 54) 00:30:47.603 28.044 - 28.160: 94.5521% ( 53) 00:30:47.603 28.160 - 28.276: 95.1453% ( 49) 00:30:47.603 28.276 - 28.393: 95.9443% ( 66) 00:30:47.603 28.393 - 28.509: 96.8765% ( 77) 00:30:47.603 28.509 - 28.625: 97.4697% ( 49) 00:30:47.603 28.625 - 28.742: 98.0266% ( 46) 00:30:47.603 28.742 - 28.858: 98.3777% ( 29) 00:30:47.603 28.858 - 28.975: 98.5714% ( 16) 00:30:47.603 28.975 - 29.091: 98.7409% ( 14) 00:30:47.603 29.091 - 29.207: 98.8862% ( 12) 00:30:47.603 29.207 - 29.324: 99.0557% ( 14) 00:30:47.603 29.324 - 29.440: 99.0920% ( 3) 00:30:47.603 29.440 - 29.556: 99.1162% ( 2) 00:30:47.603 29.556 - 29.673: 99.1404% ( 2) 00:30:47.603 29.673 - 29.789: 99.1525% ( 1) 00:30:47.603 29.789 - 30.022: 99.1889% ( 3) 00:30:47.603 30.022 - 30.255: 99.2252% ( 3) 00:30:47.603 30.255 - 30.487: 99.2494% ( 2) 00:30:47.603 30.487 - 30.720: 99.2736% ( 2) 00:30:47.603 30.720 - 30.953: 99.3099% ( 3) 00:30:47.603 30.953 - 31.185: 99.3220% ( 1) 00:30:47.603 31.185 - 31.418: 99.3462% ( 2) 00:30:47.603 31.651 - 31.884: 99.3705% ( 2) 00:30:47.603 32.349 - 32.582: 99.3826% ( 1) 00:30:47.603 32.815 - 33.047: 99.4068% ( 2) 00:30:47.603 33.047 - 33.280: 99.4189% ( 1) 00:30:47.603 33.745 - 33.978: 99.4673% ( 4) 00:30:47.604 33.978 - 34.211: 99.5036% ( 3) 00:30:47.604 34.211 - 34.444: 99.5521% ( 4) 00:30:47.604 34.676 - 34.909: 99.5763% ( 2) 00:30:47.604 34.909 - 35.142: 99.6126% ( 3) 00:30:47.604 35.142 - 35.375: 99.6489% ( 3) 00:30:47.604 35.375 - 35.607: 99.6610% ( 1) 00:30:47.604 35.607 - 35.840: 99.6731% ( 1) 00:30:47.604 35.840 - 36.073: 99.6852% ( 1) 00:30:47.604 36.073 - 36.305: 99.6973% ( 1) 00:30:47.604 36.538 - 36.771: 99.7337% ( 3) 00:30:47.604 36.771 - 37.004: 99.7458% ( 1) 00:30:47.604 37.004 - 37.236: 99.7579% ( 1) 00:30:47.604 38.633 - 38.865: 99.7700% ( 1) 00:30:47.604 39.098 - 39.331: 99.7821% ( 1) 00:30:47.604 40.029 - 40.262: 99.7942% ( 1) 00:30:47.604 42.589 - 42.822: 99.8063% ( 1) 00:30:47.604 43.055 - 43.287: 99.8184% ( 1) 00:30:47.604 43.520 - 43.753: 99.8305% ( 1) 00:30:47.604 43.753 - 43.985: 99.8426% ( 1) 00:30:47.604 43.985 - 44.218: 99.8668% ( 2) 00:30:47.604 45.149 - 45.382: 99.8789% ( 1) 00:30:47.604 46.080 - 46.313: 99.8910% ( 1) 00:30:47.604 48.640 - 48.873: 99.9031% ( 1) 00:30:47.604 49.804 - 50.036: 99.9153% ( 1) 00:30:47.604 55.389 - 55.622: 99.9395% ( 2) 00:30:47.604 57.949 - 58.182: 99.9516% ( 1) 00:30:47.604 60.975 - 61.440: 99.9758% ( 2) 00:30:47.604 61.440 - 61.905: 99.9879% ( 1) 00:30:47.604 64.698 - 65.164: 100.0000% ( 1) 00:30:47.604 00:30:47.604 Complete histogram 00:30:47.604 
================== 00:30:47.604 Range in us Cumulative Count 00:30:47.604 7.564 - 7.622: 0.2179% ( 18) 00:30:47.604 7.622 - 7.680: 1.4528% ( 102) 00:30:47.604 7.680 - 7.738: 3.8257% ( 196) 00:30:47.604 7.738 - 7.796: 5.5932% ( 146) 00:30:47.604 7.796 - 7.855: 7.4697% ( 155) 00:30:47.604 7.855 - 7.913: 12.5061% ( 416) 00:30:47.604 7.913 - 7.971: 18.9467% ( 532) 00:30:47.604 7.971 - 8.029: 22.4576% ( 290) 00:30:47.604 8.029 - 8.087: 25.0484% ( 214) 00:30:47.604 8.087 - 8.145: 31.7676% ( 555) 00:30:47.604 8.145 - 8.204: 41.3196% ( 789) 00:30:47.604 8.204 - 8.262: 46.4165% ( 421) 00:30:47.604 8.262 - 8.320: 48.7530% ( 193) 00:30:47.604 8.320 - 8.378: 54.6489% ( 487) 00:30:47.604 8.378 - 8.436: 64.9637% ( 852) 00:30:47.604 8.436 - 8.495: 71.1017% ( 507) 00:30:47.604 8.495 - 8.553: 72.8571% ( 145) 00:30:47.604 8.553 - 8.611: 74.8426% ( 164) 00:30:47.604 8.611 - 8.669: 79.6973% ( 401) 00:30:47.604 8.669 - 8.727: 83.4625% ( 311) 00:30:47.604 8.727 - 8.785: 84.6731% ( 100) 00:30:47.604 8.785 - 8.844: 85.2542% ( 48) 00:30:47.604 8.844 - 8.902: 86.0169% ( 63) 00:30:47.604 8.902 - 8.960: 87.4576% ( 119) 00:30:47.604 8.960 - 9.018: 88.2324% ( 64) 00:30:47.604 9.018 - 9.076: 88.5956% ( 30) 00:30:47.604 9.076 - 9.135: 88.9225% ( 27) 00:30:47.604 9.135 - 9.193: 89.4189% ( 41) 00:30:47.604 9.193 - 9.251: 89.9879% ( 47) 00:30:47.604 9.251 - 9.309: 90.2906% ( 25) 00:30:47.604 9.309 - 9.367: 90.5327% ( 20) 00:30:47.604 9.367 - 9.425: 90.6174% ( 7) 00:30:47.604 9.425 - 9.484: 90.8232% ( 17) 00:30:47.604 9.484 - 9.542: 90.9322% ( 9) 00:30:47.604 9.542 - 9.600: 91.0654% ( 11) 00:30:47.604 9.600 - 9.658: 91.1259% ( 5) 00:30:47.604 9.658 - 9.716: 91.2107% ( 7) 00:30:47.604 9.716 - 9.775: 91.3075% ( 8) 00:30:47.604 9.775 - 9.833: 91.3923% ( 7) 00:30:47.604 9.833 - 9.891: 91.4891% ( 8) 00:30:47.604 9.891 - 9.949: 91.5133% ( 2) 00:30:47.604 9.949 - 10.007: 91.5738% ( 5) 00:30:47.604 10.007 - 10.065: 91.6223% ( 4) 00:30:47.604 10.065 - 10.124: 91.6586% ( 3) 00:30:47.604 10.124 - 10.182: 91.6949% ( 3) 00:30:47.604 10.182 - 10.240: 91.7191% ( 2) 00:30:47.604 10.240 - 10.298: 91.7554% ( 3) 00:30:47.604 10.298 - 10.356: 91.8039% ( 4) 00:30:47.604 10.356 - 10.415: 91.8281% ( 2) 00:30:47.604 10.415 - 10.473: 91.8523% ( 2) 00:30:47.604 10.473 - 10.531: 91.8765% ( 2) 00:30:47.604 10.531 - 10.589: 91.9128% ( 3) 00:30:47.604 10.647 - 10.705: 91.9249% ( 1) 00:30:47.604 10.764 - 10.822: 91.9492% ( 2) 00:30:47.604 10.822 - 10.880: 91.9734% ( 2) 00:30:47.604 10.996 - 11.055: 92.0097% ( 3) 00:30:47.604 11.055 - 11.113: 92.0218% ( 1) 00:30:47.604 11.113 - 11.171: 92.0581% ( 3) 00:30:47.604 11.229 - 11.287: 92.0702% ( 1) 00:30:47.604 11.345 - 11.404: 92.0823% ( 1) 00:30:47.604 11.404 - 11.462: 92.0944% ( 1) 00:30:47.604 11.927 - 11.985: 92.1065% ( 1) 00:30:47.604 12.102 - 12.160: 92.1186% ( 1) 00:30:47.604 12.160 - 12.218: 92.1308% ( 1) 00:30:47.604 12.218 - 12.276: 92.1429% ( 1) 00:30:47.604 12.393 - 12.451: 92.1550% ( 1) 00:30:47.604 12.567 - 12.625: 92.1792% ( 2) 00:30:47.604 12.625 - 12.684: 92.1913% ( 1) 00:30:47.604 12.742 - 12.800: 92.2034% ( 1) 00:30:47.604 12.858 - 12.916: 92.2155% ( 1) 00:30:47.604 12.975 - 13.033: 92.2276% ( 1) 00:30:47.604 13.033 - 13.091: 92.2397% ( 1) 00:30:47.604 13.149 - 13.207: 92.2518% ( 1) 00:30:47.604 13.265 - 13.324: 92.2639% ( 1) 00:30:47.604 13.324 - 13.382: 92.2760% ( 1) 00:30:47.604 13.440 - 13.498: 92.3002% ( 2) 00:30:47.604 13.498 - 13.556: 92.3366% ( 3) 00:30:47.604 13.556 - 13.615: 92.3487% ( 1) 00:30:47.604 13.615 - 13.673: 92.3608% ( 1) 00:30:47.604 13.673 - 13.731: 92.3971% ( 3) 
00:30:47.604 13.731 - 13.789: 92.4213% ( 2) 00:30:47.604 13.789 - 13.847: 92.4576% ( 3) 00:30:47.604 13.847 - 13.905: 92.4818% ( 2) 00:30:47.604 13.905 - 13.964: 92.5061% ( 2) 00:30:47.604 13.964 - 14.022: 92.5303% ( 2) 00:30:47.604 14.022 - 14.080: 92.5424% ( 1) 00:30:47.604 14.080 - 14.138: 92.5666% ( 2) 00:30:47.604 14.138 - 14.196: 92.5908% ( 2) 00:30:47.604 14.313 - 14.371: 92.6029% ( 1) 00:30:47.604 14.429 - 14.487: 92.6150% ( 1) 00:30:47.604 14.545 - 14.604: 92.6392% ( 2) 00:30:47.604 14.604 - 14.662: 92.6634% ( 2) 00:30:47.604 14.662 - 14.720: 92.6877% ( 2) 00:30:47.604 14.720 - 14.778: 92.6998% ( 1) 00:30:47.604 14.778 - 14.836: 92.7119% ( 1) 00:30:47.604 14.836 - 14.895: 92.7240% ( 1) 00:30:47.604 14.895 - 15.011: 92.7603% ( 3) 00:30:47.604 15.011 - 15.127: 92.7966% ( 3) 00:30:47.604 15.244 - 15.360: 92.8208% ( 2) 00:30:47.604 15.360 - 15.476: 92.8329% ( 1) 00:30:47.604 15.476 - 15.593: 92.8450% ( 1) 00:30:47.604 15.593 - 15.709: 92.8692% ( 2) 00:30:47.604 15.709 - 15.825: 92.8814% ( 1) 00:30:47.604 15.825 - 15.942: 92.8935% ( 1) 00:30:47.604 15.942 - 16.058: 92.9056% ( 1) 00:30:47.604 16.058 - 16.175: 92.9177% ( 1) 00:30:47.604 16.175 - 16.291: 92.9298% ( 1) 00:30:47.604 16.291 - 16.407: 92.9419% ( 1) 00:30:47.604 16.407 - 16.524: 92.9661% ( 2) 00:30:47.604 16.756 - 16.873: 93.0024% ( 3) 00:30:47.604 16.873 - 16.989: 93.0145% ( 1) 00:30:47.604 16.989 - 17.105: 93.0266% ( 1) 00:30:47.604 17.105 - 17.222: 93.0508% ( 2) 00:30:47.604 17.222 - 17.338: 93.0630% ( 1) 00:30:47.604 17.338 - 17.455: 93.0751% ( 1) 00:30:47.604 17.571 - 17.687: 93.1114% ( 3) 00:30:47.604 17.920 - 18.036: 93.1235% ( 1) 00:30:47.604 18.036 - 18.153: 93.1477% ( 2) 00:30:47.604 18.153 - 18.269: 93.1598% ( 1) 00:30:47.604 18.269 - 18.385: 93.1719% ( 1) 00:30:47.604 18.385 - 18.502: 93.1840% ( 1) 00:30:47.604 18.618 - 18.735: 93.2203% ( 3) 00:30:47.604 18.851 - 18.967: 93.2446% ( 2) 00:30:47.604 19.200 - 19.316: 93.2567% ( 1) 00:30:47.604 19.316 - 19.433: 93.2688% ( 1) 00:30:47.604 19.549 - 19.665: 93.2809% ( 1) 00:30:47.604 20.596 - 20.713: 93.2930% ( 1) 00:30:47.604 20.713 - 20.829: 93.3051% ( 1) 00:30:47.604 22.109 - 22.225: 93.3293% ( 2) 00:30:47.604 22.225 - 22.342: 93.4019% ( 6) 00:30:47.604 22.342 - 22.458: 93.5109% ( 9) 00:30:47.604 22.458 - 22.575: 93.7893% ( 23) 00:30:47.604 22.575 - 22.691: 94.1525% ( 30) 00:30:47.604 22.691 - 22.807: 94.6731% ( 43) 00:30:47.604 22.807 - 22.924: 95.1332% ( 38) 00:30:47.604 22.924 - 23.040: 95.7385% ( 50) 00:30:47.604 23.040 - 23.156: 96.0169% ( 23) 00:30:47.604 23.156 - 23.273: 96.2591% ( 20) 00:30:47.604 23.273 - 23.389: 96.4165% ( 13) 00:30:47.604 23.389 - 23.505: 96.7312% ( 26) 00:30:47.604 23.505 - 23.622: 97.2639% ( 44) 00:30:47.604 23.622 - 23.738: 97.8571% ( 49) 00:30:47.604 23.738 - 23.855: 98.3777% ( 43) 00:30:47.604 23.855 - 23.971: 98.7893% ( 34) 00:30:47.604 23.971 - 24.087: 98.9952% ( 17) 00:30:47.604 24.087 - 24.204: 99.1525% ( 13) 00:30:47.604 24.204 - 24.320: 99.2373% ( 7) 00:30:47.604 24.320 - 24.436: 99.3462% ( 9) 00:30:47.604 24.436 - 24.553: 99.4189% ( 6) 00:30:47.604 24.553 - 24.669: 99.4310% ( 1) 00:30:47.604 24.669 - 24.785: 99.4431% ( 1) 00:30:47.604 24.785 - 24.902: 99.4673% ( 2) 00:30:47.604 25.018 - 25.135: 99.4794% ( 1) 00:30:47.604 25.135 - 25.251: 99.4915% ( 1) 00:30:47.605 25.600 - 25.716: 99.5157% ( 2) 00:30:47.605 25.833 - 25.949: 99.5521% ( 3) 00:30:47.605 26.647 - 26.764: 99.5642% ( 1) 00:30:47.605 27.229 - 27.345: 99.5884% ( 2) 00:30:47.605 27.578 - 27.695: 99.6005% ( 1) 00:30:47.605 27.695 - 27.811: 99.6126% ( 1) 00:30:47.605 27.811 
- 27.927: 99.6247% ( 1) 00:30:47.605 28.160 - 28.276: 99.6368% ( 1) 00:30:47.605 28.625 - 28.742: 99.6489% ( 1) 00:30:47.605 28.858 - 28.975: 99.6610% ( 1) 00:30:47.605 28.975 - 29.091: 99.6731% ( 1) 00:30:47.605 29.440 - 29.556: 99.6852% ( 1) 00:30:47.605 29.789 - 30.022: 99.7094% ( 2) 00:30:47.605 30.255 - 30.487: 99.7337% ( 2) 00:30:47.605 31.185 - 31.418: 99.7458% ( 1) 00:30:47.605 31.418 - 31.651: 99.7579% ( 1) 00:30:47.605 31.651 - 31.884: 99.7700% ( 1) 00:30:47.605 31.884 - 32.116: 99.7821% ( 1) 00:30:47.605 32.349 - 32.582: 99.8305% ( 4) 00:30:47.605 33.745 - 33.978: 99.8426% ( 1) 00:30:47.605 34.211 - 34.444: 99.8547% ( 1) 00:30:47.605 34.444 - 34.676: 99.8668% ( 1) 00:30:47.605 36.538 - 36.771: 99.8910% ( 2) 00:30:47.605 38.865 - 39.098: 99.9031% ( 1) 00:30:47.605 39.098 - 39.331: 99.9153% ( 1) 00:30:47.605 39.331 - 39.564: 99.9274% ( 1) 00:30:47.605 41.425 - 41.658: 99.9395% ( 1) 00:30:47.605 44.684 - 44.916: 99.9516% ( 1) 00:30:47.605 44.916 - 45.149: 99.9637% ( 1) 00:30:47.605 53.527 - 53.760: 99.9758% ( 1) 00:30:47.605 113.571 - 114.036: 99.9879% ( 1) 00:30:47.605 116.829 - 117.295: 100.0000% ( 1) 00:30:47.605 00:30:47.605 00:30:47.605 real 0m1.318s 00:30:47.605 user 0m1.111s 00:30:47.605 sys 0m0.128s 00:30:47.605 17:09:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:47.605 ************************************ 00:30:47.605 END TEST nvme_overhead 00:30:47.605 ************************************ 00:30:47.605 17:09:36 -- common/autotest_common.sh@10 -- # set +x 00:30:47.605 17:09:36 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:30:47.605 17:09:36 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:30:47.605 17:09:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:47.605 17:09:36 -- common/autotest_common.sh@10 -- # set +x 00:30:47.605 ************************************ 00:30:47.605 START TEST nvme_arbitration 00:30:47.605 ************************************ 00:30:47.605 17:09:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:30:50.887 Initializing NVMe Controllers 00:30:50.887 Attached to 0000:00:06.0 00:30:50.887 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:30:50.887 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:30:50.887 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:30:50.887 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:30:50.887 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:30:50.887 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:30:50.887 Initialization complete. Launching workers. 
00:30:50.887 Starting thread on core 1 with urgent priority queue
00:30:50.887 Starting thread on core 2 with urgent priority queue
00:30:50.887 Starting thread on core 0 with urgent priority queue
00:30:50.887 Starting thread on core 3 with urgent priority queue
00:30:50.887 QEMU NVMe Ctrl (12340 ) core 0: 1600.00 IO/s 62.50 secs/100000 ios
00:30:50.887 QEMU NVMe Ctrl (12340 ) core 1: 917.33 IO/s 109.01 secs/100000 ios
00:30:50.887 QEMU NVMe Ctrl (12340 ) core 2: 1002.67 IO/s 99.73 secs/100000 ios
00:30:50.887 QEMU NVMe Ctrl (12340 ) core 3: 405.33 IO/s 246.71 secs/100000 ios
00:30:50.887 ========================================================
00:30:51.144
00:30:51.144
00:30:51.144 real 0m3.511s
00:30:51.144 user 0m9.536s
00:30:51.144 sys 0m0.127s
00:30:51.144 17:09:39 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:30:51.144 ************************************
00:30:51.144 END TEST nvme_arbitration
00:30:51.144 17:09:39 -- common/autotest_common.sh@10 -- # set +x
00:30:51.144 ************************************
00:30:51.144 17:09:39 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log
00:30:51.144 17:09:39 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:30:51.144 17:09:39 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:51.144 17:09:39 -- common/autotest_common.sh@10 -- # set +x
00:30:51.144 ************************************
00:30:51.144 START TEST nvme_single_aen
00:30:51.144 ************************************
00:30:51.144 17:09:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log
00:30:51.144 [2024-11-05 17:09:39.896986] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:30:51.144 [2024-11-05 17:09:39.897097] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:51.401 [2024-11-05 17:09:40.083051] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:30:51.401 Asynchronous Event Request test
00:30:51.401 Attached to 0000:00:06.0
00:30:51.401 Reset controller to setup AER completions for this process
00:30:51.401 Registering asynchronous event callbacks...
00:30:51.401 Getting orig temperature thresholds of all controllers
00:30:51.401 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:30:51.401 Setting all controllers temperature threshold low to trigger AER
00:30:51.401 Waiting for all controllers temperature threshold to be set lower
00:30:51.401 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:30:51.401 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0
00:30:51.401 Waiting for all controllers to trigger AER and reset threshold
00:30:51.401 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius)
00:30:51.401 Cleaning up...
00:30:51.401
00:30:51.401 real 0m0.287s
00:30:51.401 user 0m0.083s
00:30:51.401 sys 0m0.149s
00:30:51.401 17:09:40 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:30:51.402 ************************************
00:30:51.402 END TEST nvme_single_aen
00:30:51.402 ************************************
00:30:51.402 17:09:40 -- common/autotest_common.sh@10 -- # set +x
00:30:51.402 17:09:40 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:30:51.402 17:09:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:30:51.402 17:09:40 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:51.402 17:09:40 -- common/autotest_common.sh@10 -- # set +x
00:30:51.402 ************************************
00:30:51.402 START TEST nvme_doorbell_aers
00:30:51.402 ************************************
00:30:51.402 17:09:40 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers
00:30:51.402 17:09:40 -- nvme/nvme.sh@70 -- # bdfs=()
00:30:51.402 17:09:40 -- nvme/nvme.sh@70 -- # local bdfs bdf
00:30:51.402 17:09:40 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:30:51.402 17:09:40 -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:30:51.402 17:09:40 -- common/autotest_common.sh@1508 -- # bdfs=()
00:30:51.402 17:09:40 -- common/autotest_common.sh@1508 -- # local bdfs
00:30:51.402 17:09:40 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:30:51.402 17:09:40 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:30:51.402 17:09:40 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:30:51.402 17:09:40 -- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:30:51.402 17:09:40 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0
00:30:51.402 17:09:40 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:30:51.402 17:09:40 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0'
00:30:51.659 [2024-11-05 17:09:40.521153] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 139426) is not found. Dropping the request.
00:31:01.624 Executing: test_write_invalid_db
00:31:01.624 Waiting for AER completion...
00:31:01.624 Failure: test_write_invalid_db
00:31:01.624
00:31:01.624 Executing: test_invalid_db_write_overflow_sq
00:31:01.624 Waiting for AER completion...
00:31:01.624 Failure: test_invalid_db_write_overflow_sq
00:31:01.624
00:31:01.624 Executing: test_invalid_db_write_overflow_cq
00:31:01.624 Waiting for AER completion...
00:31:01.624 Failure: test_invalid_db_write_overflow_cq
00:31:01.624
00:31:01.624
00:31:01.624 real 0m10.104s
00:31:01.624 user 0m8.687s
00:31:01.624 sys 0m1.372s
00:31:01.624 17:09:50 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:31:01.624 17:09:50 -- common/autotest_common.sh@10 -- # set +x
00:31:01.624 ************************************
00:31:01.624 END TEST nvme_doorbell_aers
00:31:01.624 ************************************
00:31:01.624 17:09:50 -- nvme/nvme.sh@97 -- # uname
00:31:01.624 17:09:50 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:31:01.624 17:09:50 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log
00:31:01.624 17:09:50 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:31:01.624 17:09:50 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:31:01.624 17:09:50 -- common/autotest_common.sh@10 -- # set +x
00:31:01.624 ************************************
00:31:01.624 START TEST nvme_multi_aen
00:31:01.624 ************************************
00:31:01.624 17:09:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log
00:31:01.624 [2024-11-05 17:09:50.395795] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:31:01.624 [2024-11-05 17:09:50.395938] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:01.883 [2024-11-05 17:09:50.599677] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:31:01.883 [2024-11-05 17:09:50.599724] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 139426) is not found. Dropping the request.
00:31:01.883 [2024-11-05 17:09:50.599819] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 139426) is not found. Dropping the request.
00:31:01.883 [2024-11-05 17:09:50.599848] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 139426) is not found. Dropping the request.
00:31:01.883 [2024-11-05 17:09:50.605783] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:31:01.883 Child process pid: 139614
00:31:01.883 [2024-11-05 17:09:50.605992] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:02.141 [Child] Asynchronous Event Request test
00:31:02.141 [Child] Attached to 0000:00:06.0
00:31:02.141 [Child] Registering asynchronous event callbacks...
00:31:02.141 [Child] Getting orig temperature thresholds of all controllers
00:31:02.141 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:31:02.141 [Child] Waiting for all controllers to trigger AER and reset threshold
00:31:02.141 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:31:02.141 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius)
00:31:02.141 [Child] Cleaning up...
00:31:02.141 Asynchronous Event Request test
00:31:02.141 Attached to 0000:00:06.0
00:31:02.141 Reset controller to setup AER completions for this process
00:31:02.141 Registering asynchronous event callbacks...
00:31:02.141 Getting orig temperature thresholds of all controllers
00:31:02.141 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:31:02.141 Setting all controllers temperature threshold low to trigger AER
00:31:02.141 Waiting for all controllers temperature threshold to be set lower
00:31:02.141 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:31:02.141 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0
00:31:02.141 Waiting for all controllers to trigger AER and reset threshold
00:31:02.141 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius)
00:31:02.141 Cleaning up...
00:31:02.141
00:31:02.141 real 0m0.674s
00:31:02.141 user 0m0.276s
00:31:02.141 sys 0m0.256s
00:31:02.141 17:09:51 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:31:02.141 ************************************
00:31:02.141 END TEST nvme_multi_aen
00:31:02.141 ************************************
00:31:02.141 17:09:51 -- common/autotest_common.sh@10 -- # set +x
00:31:02.399 17:09:51 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:31:02.399 17:09:51 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:31:02.399 17:09:51 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:31:02.399 17:09:51 -- common/autotest_common.sh@10 -- # set +x
00:31:02.399 ************************************
00:31:02.399 START TEST nvme_startup
00:31:02.399 ************************************
00:31:02.399 17:09:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:31:02.657 Initializing NVMe Controllers
00:31:02.657 Attached to 0000:00:06.0
00:31:02.657 Initialization complete.
00:31:02.657 Time used:212581.672 (us).
00:31:02.657
00:31:02.657 real 0m0.304s
00:31:02.657 user 0m0.096s
00:31:02.657 sys 0m0.132s
00:31:02.657 17:09:51 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:31:02.657 17:09:51 -- common/autotest_common.sh@10 -- # set +x
00:31:02.657 ************************************
00:31:02.657 END TEST nvme_startup
00:31:02.657 ************************************
00:31:02.657 17:09:51 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:31:02.657 17:09:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:31:02.657 17:09:51 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:31:02.657 17:09:51 -- common/autotest_common.sh@10 -- # set +x
00:31:02.657 ************************************
00:31:02.657 START TEST nvme_multi_secondary
00:31:02.657 ************************************
00:31:02.657 17:09:51 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary
00:31:02.657 17:09:51 -- nvme/nvme.sh@52 -- # pid0=139681
00:31:02.657 17:09:51 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:31:02.657 17:09:51 -- nvme/nvme.sh@54 -- # pid1=139682
00:31:02.657 17:09:51 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:31:02.657 17:09:51 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:31:05.939 Initializing NVMe Controllers
00:31:05.939 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:31:05.939 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1
00:31:05.939 Initialization complete. Launching workers.
00:31:05.939 ========================================================
00:31:05.939 Latency(us)
00:31:05.939 Device Information : IOPS MiB/s Average min max
00:31:05.939 PCIE (0000:00:06.0) NSID 1 from core 1: 31650.62 123.64 505.22 125.92 16774.82
00:31:05.939 ========================================================
00:31:05.939 Total : 31650.62 123.64 505.22 125.92 16774.82
00:31:05.939
00:31:06.198 Initializing NVMe Controllers
00:31:06.198 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:31:06.198 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2
00:31:06.198 Initialization complete. Launching workers.
00:31:06.198 ========================================================
00:31:06.198 Latency(us)
00:31:06.198 Device Information : IOPS MiB/s Average min max
00:31:06.198 PCIE (0000:00:06.0) NSID 1 from core 2: 13823.67 54.00 1156.78 147.16 25174.48
00:31:06.198 ========================================================
00:31:06.198 Total : 13823.67 54.00 1156.78 147.16 25174.48
00:31:06.198
00:31:06.456 17:09:55 -- nvme/nvme.sh@56 -- # wait 139681
00:31:08.357 Initializing NVMe Controllers
00:31:08.357 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:31:08.357 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:31:08.357 Initialization complete. Launching workers.
00:31:08.357 ========================================================
00:31:08.357 Latency(us)
00:31:08.357 Device Information : IOPS MiB/s Average min max
00:31:08.357 PCIE (0000:00:06.0) NSID 1 from core 0: 41370.34 161.60 386.42 88.28 2643.76
00:31:08.357 ========================================================
00:31:08.357 Total : 41370.34 161.60 386.42 88.28 2643.76
00:31:08.357
00:31:08.357 17:09:56 -- nvme/nvme.sh@57 -- # wait 139682
00:31:08.357 17:09:56 -- nvme/nvme.sh@61 -- # pid0=139756
00:31:08.357 17:09:56 -- nvme/nvme.sh@63 -- # pid1=139757
00:31:08.357 17:09:56 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:31:08.357 17:09:56 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:31:08.357 17:09:56 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:31:11.667 Initializing NVMe Controllers
00:31:11.667 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:31:11.667 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1
00:31:11.667 Initialization complete. Launching workers.
00:31:11.667 ========================================================
00:31:11.667 Latency(us)
00:31:11.667 Device Information : IOPS MiB/s Average min max
00:31:11.667 PCIE (0000:00:06.0) NSID 1 from core 1: 34862.66 136.18 458.61 122.90 16524.27
00:31:11.667 ========================================================
00:31:11.667 Total : 34862.66 136.18 458.61 122.90 16524.27
00:31:11.667
00:31:11.667 Initializing NVMe Controllers
00:31:11.667 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:31:11.667 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:31:11.667 Initialization complete. Launching workers.
00:31:11.667 ========================================================
00:31:11.667 Latency(us)
00:31:11.667 Device Information : IOPS MiB/s Average min max
00:31:11.667 PCIE (0000:00:06.0) NSID 1 from core 0: 36806.67 143.78 434.38 86.40 1327.21
00:31:11.667 ========================================================
00:31:11.667 Total : 36806.67 143.78 434.38 86.40 1327.21
00:31:11.667
00:31:13.566 Initializing NVMe Controllers
00:31:13.566 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:31:13.566 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2
00:31:13.566 Initialization complete. Launching workers.
00:31:13.566 ========================================================
00:31:13.566 Latency(us)
00:31:13.566 Device Information : IOPS MiB/s Average min max
00:31:13.566 PCIE (0000:00:06.0) NSID 1 from core 2: 17409.60 68.01 918.70 140.35 20875.69
00:31:13.566 ========================================================
00:31:13.566 Total : 17409.60 68.01 918.70 140.35 20875.69
00:31:13.566
00:31:13.566 17:10:02 -- nvme/nvme.sh@65 -- # wait 139756
00:31:13.566 17:10:02 -- nvme/nvme.sh@66 -- # wait 139757
00:31:13.566
00:31:13.566 real 0m10.989s
00:31:13.566 user 0m18.685s
00:31:13.566 sys 0m0.828s
00:31:13.566 17:10:02 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:31:13.566 17:10:02 -- common/autotest_common.sh@10 -- # set +x
00:31:13.566 ************************************
00:31:13.566 END TEST nvme_multi_secondary
00:31:13.566 ************************************
00:31:13.566 17:10:02 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:31:13.566 17:10:02 -- nvme/nvme.sh@102 -- # kill_stub
00:31:13.566 17:10:02 -- common/autotest_common.sh@1075 -- # [[ -e /proc/138977 ]]
00:31:13.566 17:10:02 -- common/autotest_common.sh@1076 -- # kill 138977
00:31:13.566 17:10:02 -- common/autotest_common.sh@1077 -- # wait 138977
00:31:14.500 [2024-11-05 17:10:03.399534] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 139613) is not found. Dropping the request.
00:31:14.500 [2024-11-05 17:10:03.399663] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 139613) is not found. Dropping the request.
00:31:14.500 [2024-11-05 17:10:03.399737] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 139613) is not found. Dropping the request.
00:31:14.500 [2024-11-05 17:10:03.399764] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 139613) is not found. Dropping the request.
00:31:14.758 17:10:03 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0
00:31:14.758 17:10:03 -- common/autotest_common.sh@1083 -- # echo 2
00:31:14.758 17:10:03 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:31:14.758 17:10:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:31:14.758 17:10:03 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:31:14.758 17:10:03 -- common/autotest_common.sh@10 -- # set +x
00:31:14.758 ************************************
00:31:14.758 START TEST bdev_nvme_reset_stuck_adm_cmd
00:31:14.758 ************************************
00:31:14.758 17:10:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:31:15.017 * Looking for test storage...
00:31:15.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:15.017 17:10:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:31:15.017 17:10:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:31:15.017 17:10:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:31:15.017 17:10:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:31:15.017 17:10:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:31:15.017 17:10:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:31:15.017 17:10:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:31:15.017 17:10:03 -- scripts/common.sh@335 -- # IFS=.-: 00:31:15.017 17:10:03 -- scripts/common.sh@335 -- # read -ra ver1 00:31:15.017 17:10:03 -- scripts/common.sh@336 -- # IFS=.-: 00:31:15.017 17:10:03 -- scripts/common.sh@336 -- # read -ra ver2 00:31:15.017 17:10:03 -- scripts/common.sh@337 -- # local 'op=<' 00:31:15.017 17:10:03 -- scripts/common.sh@339 -- # ver1_l=2 00:31:15.017 17:10:03 -- scripts/common.sh@340 -- # ver2_l=1 00:31:15.017 17:10:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:31:15.017 17:10:03 -- scripts/common.sh@343 -- # case "$op" in 00:31:15.017 17:10:03 -- scripts/common.sh@344 -- # : 1 00:31:15.017 17:10:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:31:15.017 17:10:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:15.017 17:10:03 -- scripts/common.sh@364 -- # decimal 1 00:31:15.017 17:10:03 -- scripts/common.sh@352 -- # local d=1 00:31:15.017 17:10:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:15.017 17:10:03 -- scripts/common.sh@354 -- # echo 1 00:31:15.017 17:10:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:31:15.017 17:10:03 -- scripts/common.sh@365 -- # decimal 2 00:31:15.017 17:10:03 -- scripts/common.sh@352 -- # local d=2 00:31:15.017 17:10:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:15.017 17:10:03 -- scripts/common.sh@354 -- # echo 2 00:31:15.017 17:10:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:31:15.017 17:10:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:31:15.017 17:10:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:31:15.017 17:10:03 -- scripts/common.sh@367 -- # return 0 00:31:15.017 17:10:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:15.017 17:10:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:31:15.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.017 --rc genhtml_branch_coverage=1 00:31:15.017 --rc genhtml_function_coverage=1 00:31:15.017 --rc genhtml_legend=1 00:31:15.017 --rc geninfo_all_blocks=1 00:31:15.017 --rc geninfo_unexecuted_blocks=1 00:31:15.017 00:31:15.017 ' 00:31:15.017 17:10:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:31:15.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.017 --rc genhtml_branch_coverage=1 00:31:15.017 --rc genhtml_function_coverage=1 00:31:15.017 --rc genhtml_legend=1 00:31:15.017 --rc geninfo_all_blocks=1 00:31:15.017 --rc geninfo_unexecuted_blocks=1 00:31:15.017 00:31:15.017 ' 00:31:15.017 17:10:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:31:15.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.017 --rc genhtml_branch_coverage=1 00:31:15.017 --rc genhtml_function_coverage=1 00:31:15.017 --rc genhtml_legend=1 00:31:15.017 --rc geninfo_all_blocks=1 00:31:15.017 --rc geninfo_unexecuted_blocks=1 00:31:15.017 00:31:15.017 ' 00:31:15.017 17:10:03 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:31:15.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.017 --rc genhtml_branch_coverage=1 00:31:15.017 --rc genhtml_function_coverage=1 00:31:15.017 --rc genhtml_legend=1 00:31:15.017 --rc geninfo_all_blocks=1 00:31:15.017 --rc geninfo_unexecuted_blocks=1 00:31:15.017 00:31:15.017 ' 00:31:15.017 17:10:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:31:15.017 17:10:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:31:15.017 17:10:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:31:15.017 17:10:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:31:15.017 17:10:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:31:15.017 17:10:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:31:15.017 17:10:03 -- common/autotest_common.sh@1519 -- # bdfs=() 00:31:15.017 17:10:03 -- common/autotest_common.sh@1519 -- # local bdfs 00:31:15.017 17:10:03 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:31:15.017 17:10:03 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:31:15.017 17:10:03 -- common/autotest_common.sh@1508 -- # bdfs=() 00:31:15.017 17:10:03 -- common/autotest_common.sh@1508 -- # local bdfs 00:31:15.017 17:10:03 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:15.017 17:10:03 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:15.017 17:10:03 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:31:15.017 17:10:03 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:31:15.017 17:10:03 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:31:15.017 17:10:03 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:31:15.017 17:10:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:31:15.017 17:10:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:31:15.017 17:10:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:31:15.017 17:10:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=139923 00:31:15.017 17:10:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:15.017 17:10:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 139923 00:31:15.017 17:10:03 -- common/autotest_common.sh@829 -- # '[' -z 139923 ']' 00:31:15.017 17:10:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.017 17:10:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:15.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.017 17:10:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.017 17:10:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:15.017 17:10:03 -- common/autotest_common.sh@10 -- # set +x 00:31:15.275 [2024-11-05 17:10:03.935317] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:31:15.275 [2024-11-05 17:10:03.935465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139923 ] 00:31:15.275 [2024-11-05 17:10:04.127796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:15.532 [2024-11-05 17:10:04.298681] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:15.532 [2024-11-05 17:10:04.299068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.532 [2024-11-05 17:10:04.299217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:15.532 [2024-11-05 17:10:04.299346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.532 [2024-11-05 17:10:04.299348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:16.904 17:10:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:16.904 17:10:05 -- common/autotest_common.sh@862 -- # return 0 00:31:16.904 17:10:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:31:16.904 17:10:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.904 17:10:05 -- common/autotest_common.sh@10 -- # set +x 00:31:16.904 nvme0n1 00:31:16.904 17:10:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.904 17:10:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:31:16.904 17:10:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_feiaa.txt 00:31:16.904 17:10:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:31:16.904 17:10:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.904 17:10:05 -- common/autotest_common.sh@10 -- # set +x 00:31:16.904 true 00:31:16.904 17:10:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.904 17:10:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:31:16.904 17:10:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1730826605 00:31:16.904 17:10:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=139965 00:31:16.904 17:10:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:31:16.904 17:10:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:16.904 17:10:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:31:19.430 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:19.430 17:10:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.430 17:10:07 -- common/autotest_common.sh@10 -- # set +x 00:31:19.430 [2024-11-05 17:10:07.755368] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:31:19.430 [2024-11-05 17:10:07.755864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.430 [2024-11-05 17:10:07.755961] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:31:19.430 [2024-11-05 17:10:07.755994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.430 [2024-11-05 17:10:07.758011] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:19.430 17:10:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.430 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 139965 00:31:19.430 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 139965 00:31:19.430 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 139965 00:31:19.430 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:31:19.430 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:31:19.430 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:19.430 17:10:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.430 17:10:07 -- common/autotest_common.sh@10 -- # set +x 00:31:19.430 17:10:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.430 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:31:19.430 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_feiaa.txt 00:31:19.430 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:31:19.430 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:31:19.430 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:19.430 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:19.430 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:19.430 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:19.430 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:19.430 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:19.430 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:31:19.430 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:31:19.430 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:31:19.431 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:19.431 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:19.431 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:19.431 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:19.431 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:19.431 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:19.431 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:31:19.431 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:31:19.431 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_feiaa.txt 00:31:19.431 17:10:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 139923 00:31:19.431 17:10:07 -- common/autotest_common.sh@936 -- # '[' -z 139923 ']' 00:31:19.431 17:10:07 -- common/autotest_common.sh@940 -- # kill -0 139923 00:31:19.431 17:10:07 -- common/autotest_common.sh@941 -- # uname 00:31:19.431 
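The completion captured in /tmp/err_inj_feiaa.txt is decoded above by base64_decode_bits: the base64 cpl blob is expanded to bytes with hexdump, a status word is assembled (2 for this run), and SC/SCT fall out by shift-and-mask. Just that arithmetic, with the values from this trace:

    status=2                          # assembled from the decoded completion bytes
    sc=$(( (status >> 1) & 255 ))     # Status Code      -> 0x1, matches --sc 1
    sct=$(( (status >> 9) & 3 ))      # Status Code Type -> 0x0, matches --sct 0

The test then asserts these equal the injected values in the (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) check further down.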
17:10:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:19.431 17:10:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 139923 00:31:19.431 17:10:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:19.431 17:10:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:19.431 17:10:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 139923' 00:31:19.431 killing process with pid 139923 00:31:19.431 17:10:07 -- common/autotest_common.sh@955 -- # kill 139923 00:31:19.431 17:10:07 -- common/autotest_common.sh@960 -- # wait 139923 00:31:21.328 17:10:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:31:21.328 17:10:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:31:21.328 00:31:21.328 real 0m6.055s 00:31:21.328 user 0m21.678s 00:31:21.328 sys 0m0.683s 00:31:21.328 17:10:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:21.328 17:10:09 -- common/autotest_common.sh@10 -- # set +x 00:31:21.328 ************************************ 00:31:21.328 END TEST bdev_nvme_reset_stuck_adm_cmd 00:31:21.328 ************************************ 00:31:21.328 17:10:09 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:31:21.328 17:10:09 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:31:21.328 17:10:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:21.328 17:10:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:21.328 17:10:09 -- common/autotest_common.sh@10 -- # set +x 00:31:21.328 ************************************ 00:31:21.328 START TEST nvme_fio 00:31:21.328 ************************************ 00:31:21.328 17:10:09 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:31:21.328 17:10:09 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:31:21.328 17:10:09 -- nvme/nvme.sh@32 -- # ran_fio=false 00:31:21.328 17:10:09 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:31:21.328 17:10:09 -- common/autotest_common.sh@1508 -- # bdfs=() 00:31:21.328 17:10:09 -- common/autotest_common.sh@1508 -- # local bdfs 00:31:21.328 17:10:09 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:21.328 17:10:09 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:21.328 17:10:09 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:31:21.329 17:10:09 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:31:21.329 17:10:09 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:31:21.329 17:10:09 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0') 00:31:21.329 17:10:09 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:31:21.329 17:10:09 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:31:21.329 17:10:09 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:31:21.329 17:10:09 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:31:21.329 17:10:10 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:31:21.329 17:10:10 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:31:21.587 17:10:10 -- nvme/nvme.sh@41 -- # bs=4096 00:31:21.587 17:10:10 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:21.587 
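fio_nvme above resolves the sanitizer runtime with ldd and preloads it together with the SPDK ioengine so the plugin loads cleanly under the ASan build. The effective command, assembled from the values in the trace (note the filename syntax uses '.' in place of ':' in the traddr, since fio reserves ':' as a filename separator):

    LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096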
17:10:10 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:21.587 17:10:10 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:31:21.587 17:10:10 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:21.587 17:10:10 -- common/autotest_common.sh@1328 -- # local sanitizers 00:31:21.587 17:10:10 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:21.587 17:10:10 -- common/autotest_common.sh@1330 -- # shift 00:31:21.587 17:10:10 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:31:21.587 17:10:10 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:31:21.587 17:10:10 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:21.587 17:10:10 -- common/autotest_common.sh@1334 -- # grep libasan 00:31:21.587 17:10:10 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:31:21.587 17:10:10 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:31:21.587 17:10:10 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:31:21.587 17:10:10 -- common/autotest_common.sh@1336 -- # break 00:31:21.587 17:10:10 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:31:21.587 17:10:10 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:21.587 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:21.587 fio-3.35 00:31:21.587 Starting 1 thread 00:31:24.873 00:31:24.873 test: (groupid=0, jobs=1): err= 0: pid=140113: Tue Nov 5 17:10:13 2024 00:31:24.873 read: IOPS=15.9k, BW=62.1MiB/s (65.1MB/s)(124MiB/2001msec) 00:31:24.873 slat (nsec): min=3628, max=88715, avg=5913.34, stdev=3842.36 00:31:24.873 clat (usec): min=236, max=9958, avg=3998.52, stdev=282.04 00:31:24.873 lat (usec): min=240, max=10047, avg=4004.43, stdev=282.27 00:31:24.873 clat percentiles (usec): 00:31:24.873 | 1.00th=[ 3490], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3818], 00:31:24.873 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 3982], 60.00th=[ 4047], 00:31:24.873 | 70.00th=[ 4080], 80.00th=[ 4146], 90.00th=[ 4228], 95.00th=[ 4359], 00:31:24.873 | 99.00th=[ 4686], 99.50th=[ 4883], 99.90th=[ 7242], 99.95th=[ 8586], 00:31:24.873 | 99.99th=[ 9765] 00:31:24.873 bw ( KiB/s): min=60798, max=65384, per=99.61%, avg=63354.00, stdev=2337.81, samples=3 00:31:24.873 iops : min=15199, max=16346, avg=15838.33, stdev=584.73, samples=3 00:31:24.873 write: IOPS=15.9k, BW=62.2MiB/s (65.2MB/s)(124MiB/2001msec); 0 zone resets 00:31:24.873 slat (usec): min=3, max=153, avg= 6.11, stdev= 3.94 00:31:24.873 clat (usec): min=305, max=9845, avg=4015.91, stdev=287.26 00:31:24.873 lat (usec): min=326, max=9869, avg=4022.02, stdev=287.46 00:31:24.873 clat percentiles (usec): 00:31:24.873 | 1.00th=[ 3523], 5.00th=[ 3687], 10.00th=[ 3752], 20.00th=[ 3851], 00:31:24.873 | 30.00th=[ 3916], 40.00th=[ 3949], 50.00th=[ 4015], 60.00th=[ 4047], 00:31:24.873 | 70.00th=[ 4113], 80.00th=[ 4178], 90.00th=[ 4293], 95.00th=[ 4359], 00:31:24.873 | 99.00th=[ 4686], 99.50th=[ 4883], 99.90th=[ 7504], 99.95th=[ 8717], 
00:31:24.873 | 99.99th=[ 9634] 00:31:24.873 bw ( KiB/s): min=61125, max=64768, per=99.01%, avg=63060.33, stdev=1832.14, samples=3 00:31:24.873 iops : min=15281, max=16192, avg=15765.00, stdev=458.17, samples=3 00:31:24.873 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:31:24.873 lat (msec) : 2=0.05%, 4=50.19%, 10=49.71% 00:31:24.873 cpu : usr=99.85%, sys=0.05%, ctx=15, majf=0, minf=37 00:31:24.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:24.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:24.873 issued rwts: total=31817,31861,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.873 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:24.873 00:31:24.873 Run status group 0 (all jobs): 00:31:24.873 READ: bw=62.1MiB/s (65.1MB/s), 62.1MiB/s-62.1MiB/s (65.1MB/s-65.1MB/s), io=124MiB (130MB), run=2001-2001msec 00:31:24.873 WRITE: bw=62.2MiB/s (65.2MB/s), 62.2MiB/s-62.2MiB/s (65.2MB/s-65.2MB/s), io=124MiB (131MB), run=2001-2001msec 00:31:24.873 ----------------------------------------------------- 00:31:24.873 Suppressions used: 00:31:24.873 count bytes template 00:31:24.873 1 32 /usr/src/fio/parse.c 00:31:24.873 ----------------------------------------------------- 00:31:24.873 00:31:24.873 17:10:13 -- nvme/nvme.sh@44 -- # ran_fio=true 00:31:24.873 17:10:13 -- nvme/nvme.sh@46 -- # true 00:31:24.873 00:31:24.873 real 0m3.988s 00:31:24.873 user 0m3.309s 00:31:24.873 sys 0m0.358s 00:31:24.873 17:10:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:24.873 17:10:13 -- common/autotest_common.sh@10 -- # set +x 00:31:24.873 ************************************ 00:31:24.873 END TEST nvme_fio 00:31:24.873 ************************************ 00:31:25.131 ************************************ 00:31:25.131 END TEST nvme 00:31:25.131 ************************************ 00:31:25.131 00:31:25.131 real 0m48.284s 00:31:25.131 user 2m7.997s 00:31:25.131 sys 0m8.258s 00:31:25.131 17:10:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:25.131 17:10:13 -- common/autotest_common.sh@10 -- # set +x 00:31:25.131 17:10:13 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:31:25.131 17:10:13 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:31:25.131 17:10:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:25.131 17:10:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:25.131 17:10:13 -- common/autotest_common.sh@10 -- # set +x 00:31:25.132 ************************************ 00:31:25.132 START TEST nvme_scc 00:31:25.132 ************************************ 00:31:25.132 17:10:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:31:25.132 * Looking for test storage... 
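Before the nvme_scc run starts in earnest, a quick consistency check on the fio summary above (randrw, bs=4096, iodepth=128, 2001 msec run): the reported IOPS, bandwidth, and totals agree with each other.

    echo $(( 31817 * 4096 ))   # 130322432 B ~= 130 MB, matches READ io=124MiB (130MB)
    echo $(( 31817 / 2 ))      # ~15908 IOPS over ~2 s, matches the 15.9k / 62.1MiB/s line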
00:31:25.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:25.132 17:10:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:31:25.132 17:10:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:31:25.132 17:10:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:31:25.132 17:10:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:31:25.132 17:10:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:31:25.132 17:10:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:31:25.132 17:10:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:31:25.132 17:10:13 -- scripts/common.sh@335 -- # IFS=.-: 00:31:25.132 17:10:13 -- scripts/common.sh@335 -- # read -ra ver1 00:31:25.132 17:10:13 -- scripts/common.sh@336 -- # IFS=.-: 00:31:25.132 17:10:13 -- scripts/common.sh@336 -- # read -ra ver2 00:31:25.132 17:10:13 -- scripts/common.sh@337 -- # local 'op=<' 00:31:25.132 17:10:13 -- scripts/common.sh@339 -- # ver1_l=2 00:31:25.132 17:10:13 -- scripts/common.sh@340 -- # ver2_l=1 00:31:25.132 17:10:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:31:25.132 17:10:13 -- scripts/common.sh@343 -- # case "$op" in 00:31:25.132 17:10:13 -- scripts/common.sh@344 -- # : 1 00:31:25.132 17:10:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:31:25.132 17:10:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:25.132 17:10:13 -- scripts/common.sh@364 -- # decimal 1 00:31:25.132 17:10:14 -- scripts/common.sh@352 -- # local d=1 00:31:25.132 17:10:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:25.132 17:10:14 -- scripts/common.sh@354 -- # echo 1 00:31:25.132 17:10:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:31:25.132 17:10:14 -- scripts/common.sh@365 -- # decimal 2 00:31:25.132 17:10:14 -- scripts/common.sh@352 -- # local d=2 00:31:25.132 17:10:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:25.132 17:10:14 -- scripts/common.sh@354 -- # echo 2 00:31:25.132 17:10:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:31:25.132 17:10:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:31:25.132 17:10:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:31:25.132 17:10:14 -- scripts/common.sh@367 -- # return 0 00:31:25.132 17:10:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:25.132 17:10:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:31:25.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.132 --rc genhtml_branch_coverage=1 00:31:25.132 --rc genhtml_function_coverage=1 00:31:25.132 --rc genhtml_legend=1 00:31:25.132 --rc geninfo_all_blocks=1 00:31:25.132 --rc geninfo_unexecuted_blocks=1 00:31:25.132 00:31:25.132 ' 00:31:25.132 17:10:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:31:25.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.132 --rc genhtml_branch_coverage=1 00:31:25.132 --rc genhtml_function_coverage=1 00:31:25.132 --rc genhtml_legend=1 00:31:25.132 --rc geninfo_all_blocks=1 00:31:25.132 --rc geninfo_unexecuted_blocks=1 00:31:25.132 00:31:25.132 ' 00:31:25.132 17:10:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:31:25.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.132 --rc genhtml_branch_coverage=1 00:31:25.132 --rc genhtml_function_coverage=1 00:31:25.132 --rc genhtml_legend=1 00:31:25.132 --rc geninfo_all_blocks=1 00:31:25.132 --rc geninfo_unexecuted_blocks=1 00:31:25.132 00:31:25.132 ' 00:31:25.132 17:10:14 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:31:25.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.132 --rc genhtml_branch_coverage=1 00:31:25.132 --rc genhtml_function_coverage=1 00:31:25.132 --rc genhtml_legend=1 00:31:25.132 --rc geninfo_all_blocks=1 00:31:25.132 --rc geninfo_unexecuted_blocks=1 00:31:25.132 00:31:25.132 ' 00:31:25.132 17:10:14 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:31:25.132 17:10:14 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:31:25.132 17:10:14 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:31:25.132 17:10:14 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:25.132 17:10:14 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:25.132 17:10:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.132 17:10:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.132 17:10:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.132 17:10:14 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:25.132 17:10:14 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:25.132 17:10:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:25.132 17:10:14 -- paths/export.sh@5 -- # export PATH 00:31:25.132 17:10:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:25.132 17:10:14 -- nvme/functions.sh@10 -- # ctrls=() 00:31:25.132 17:10:14 -- nvme/functions.sh@10 -- # declare -A ctrls 00:31:25.132 17:10:14 -- nvme/functions.sh@11 -- # nvmes=() 00:31:25.132 17:10:14 -- nvme/functions.sh@11 -- # declare -A nvmes 00:31:25.132 17:10:14 -- nvme/functions.sh@12 -- # bdfs=() 00:31:25.132 17:10:14 -- nvme/functions.sh@12 -- # declare -A bdfs 00:31:25.132 17:10:14 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:31:25.132 17:10:14 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:31:25.132 17:10:14 -- nvme/functions.sh@14 -- # nvme_name= 00:31:25.132 
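The long functions.sh dump below is scan_nvme_ctrls filling the associative arrays declared above (ctrls, nvmes, bdfs): for each /sys/class/nvme/nvme* controller it runs nvme-cli's id-ctrl (and id-ns per namespace) and stores every reg/val pair. A sketch of the loop shape, reconstructed from the function/line tags in the trace (functions.sh@47-57); the real helper also trims whitespace from reg and val, which is elided here:

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                 # e.g. nvme0
        declare -gA "$ctrl_dev=()"
        while IFS=: read -r reg val; do      # parse 'vid : 0x1b36' style lines
            [[ -n $val ]] && eval "${ctrl_dev}[\$reg]=\"\$val\""
        done < <(/usr/local/src/nvme-cli/nvme id-ctrl "/dev/$ctrl_dev")
    done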
17:10:14 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:25.391 17:10:14 -- nvme/nvme_scc.sh@12 -- # uname 00:31:25.391 17:10:14 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:31:25.391 17:10:14 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:31:25.391 17:10:14 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:25.650 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:25.650 Waiting for block devices as requested 00:31:25.650 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:31:25.650 17:10:14 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:31:25.650 17:10:14 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:31:25.650 17:10:14 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:31:25.650 17:10:14 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:31:25.650 17:10:14 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:31:25.650 17:10:14 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:31:25.650 17:10:14 -- scripts/common.sh@15 -- # local i 00:31:25.650 17:10:14 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:31:25.650 17:10:14 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:31:25.650 17:10:14 -- scripts/common.sh@24 -- # return 0 00:31:25.650 17:10:14 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:31:25.650 17:10:14 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:31:25.650 17:10:14 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:31:25.650 17:10:14 -- nvme/functions.sh@18 -- # shift 00:31:25.650 17:10:14 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.650 17:10:14 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.650 17:10:14 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.650 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.650 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.650 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.650 17:10:14 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.650 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:31:25.650 
17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.650 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.650 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.650 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.650 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.650 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.650 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.650 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:31:25.650 17:10:14 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.650 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.911 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.911 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.911 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.911 17:10:14 -- nvme/functions.sh@21 
-- # read -r reg val 00:31:25.911 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.911 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.911 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.911 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.911 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.911 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.911 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.911 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.911 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.911 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.911 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:31:25.911 
17:10:14 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.911 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.911 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.911 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.911 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.911 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.911 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:31:25.911 17:10:14 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:31:25.911 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- 
nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 
00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.912 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.912 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.912 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- 
# nvme0[awun]=0 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:31:25.913 17:10:14 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:31:25.913 17:10:14 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:31:25.913 17:10:14 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 
id-ns /dev/nvme0n1 00:31:25.913 17:10:14 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@18 -- # shift 00:31:25.913 17:10:14 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.913 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:31:25.913 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.913 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # 
nvme0n1[dps]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- 
nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r 
reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.914 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.914 17:10:14 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:31:25.914 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:31:25.915 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.915 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.915 17:10:14 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:31:25.915 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:31:25.915 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:31:25.915 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.915 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.915 17:10:14 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:31:25.915 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:31:25.915 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:31:25.915 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.915 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.915 17:10:14 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:31:25.915 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:31:25.915 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 
rp:0 ' 00:31:25.915 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.915 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.915 17:10:14 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:31:25.915 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:31:25.915 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:31:25.915 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.915 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.915 17:10:14 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:31:25.915 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:31:25.915 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:31:25.915 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.915 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.915 17:10:14 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:31:25.915 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:31:25.915 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:31:25.915 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.915 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.915 17:10:14 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:31:25.915 17:10:14 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:31:25.915 17:10:14 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:31:25.915 17:10:14 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.915 17:10:14 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.915 17:10:14 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:31:25.915 17:10:14 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:31:25.915 17:10:14 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:31:25.915 17:10:14 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:31:25.915 17:10:14 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:31:25.915 17:10:14 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:31:25.915 17:10:14 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:31:25.915 17:10:14 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:31:25.915 17:10:14 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:31:25.915 17:10:14 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:31:25.915 17:10:14 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:31:25.915 17:10:14 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:31:25.915 17:10:14 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:31:25.915 17:10:14 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:31:25.915 17:10:14 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:31:25.915 17:10:14 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:31:25.915 17:10:14 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:31:25.915 17:10:14 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:31:25.915 17:10:14 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:31:25.915 17:10:14 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:31:25.915 17:10:14 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:31:25.915 17:10:14 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:31:25.915 17:10:14 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:31:25.915 17:10:14 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:31:25.915 17:10:14 -- 
nvme/functions.sh@76 -- # echo 0x15d 00:31:25.915 17:10:14 -- nvme/functions.sh@184 -- # oncs=0x15d 00:31:25.915 17:10:14 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:31:25.915 17:10:14 -- nvme/functions.sh@197 -- # echo nvme0 00:31:25.915 17:10:14 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:31:25.915 17:10:14 -- nvme/functions.sh@206 -- # echo nvme0 00:31:25.915 17:10:14 -- nvme/functions.sh@207 -- # return 0 00:31:25.915 17:10:14 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:31:25.915 17:10:14 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:31:25.915 17:10:14 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:26.482 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:26.482 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:31:27.417 17:10:16 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:31:27.417 17:10:16 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:31:27.417 17:10:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:27.417 17:10:16 -- common/autotest_common.sh@10 -- # set +x 00:31:27.417 ************************************ 00:31:27.417 START TEST nvme_simple_copy 00:31:27.417 ************************************ 00:31:27.417 17:10:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:31:27.984 Initializing NVMe Controllers 00:31:27.984 Attaching to 0000:00:06.0 00:31:27.984 Controller supports SCC. Attached to 0000:00:06.0 00:31:27.984 Namespace ID: 1 size: 5GB 00:31:27.984 Initialization complete. 00:31:27.984 00:31:27.984 Controller QEMU NVMe Ctrl (12340 ) 00:31:27.984 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:31:27.984 Namespace Block Size:4096 00:31:27.984 Writing LBAs 0 to 63 with Random Data 00:31:27.984 Copied LBAs from 0 - 63 to the Destination LBA 256 00:31:27.984 LBAs matching Written Data: 64 00:31:27.984 ************************************ 00:31:27.984 END TEST nvme_simple_copy 00:31:27.984 ************************************ 00:31:27.984 00:31:27.984 real 0m0.308s 00:31:27.984 user 0m0.111s 00:31:27.984 sys 0m0.099s 00:31:27.984 17:10:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:27.984 17:10:16 -- common/autotest_common.sh@10 -- # set +x 00:31:27.984 ************************************ 00:31:27.984 END TEST nvme_scc 00:31:27.984 ************************************ 00:31:27.984 00:31:27.984 real 0m2.823s 00:31:27.984 user 0m0.793s 00:31:27.984 sys 0m1.842s 00:31:27.984 17:10:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:27.984 17:10:16 -- common/autotest_common.sh@10 -- # set +x 00:31:27.984 17:10:16 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:31:27.984 17:10:16 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:31:27.984 17:10:16 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:31:27.984 17:10:16 -- spdk/autotest.sh@225 -- # [[ 0 -eq 1 ]] 00:31:27.984 17:10:16 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:31:27.984 17:10:16 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:31:27.984 17:10:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:27.984 17:10:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:27.984 17:10:16 -- common/autotest_common.sh@10 -- # set +x 00:31:27.984 ************************************ 00:31:27.984 START TEST nvme_rpc 
00:31:27.984 ************************************ 00:31:27.984 17:10:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:31:27.984 * Looking for test storage... 00:31:27.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:27.984 17:10:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:31:27.984 17:10:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:31:27.984 17:10:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:31:27.984 17:10:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:31:27.984 17:10:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:31:27.984 17:10:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:31:27.984 17:10:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:31:27.984 17:10:16 -- scripts/common.sh@335 -- # IFS=.-: 00:31:27.984 17:10:16 -- scripts/common.sh@335 -- # read -ra ver1 00:31:27.984 17:10:16 -- scripts/common.sh@336 -- # IFS=.-: 00:31:27.984 17:10:16 -- scripts/common.sh@336 -- # read -ra ver2 00:31:27.984 17:10:16 -- scripts/common.sh@337 -- # local 'op=<' 00:31:27.984 17:10:16 -- scripts/common.sh@339 -- # ver1_l=2 00:31:27.984 17:10:16 -- scripts/common.sh@340 -- # ver2_l=1 00:31:27.984 17:10:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:31:27.984 17:10:16 -- scripts/common.sh@343 -- # case "$op" in 00:31:27.984 17:10:16 -- scripts/common.sh@344 -- # : 1 00:31:27.984 17:10:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:31:27.984 17:10:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:27.984 17:10:16 -- scripts/common.sh@364 -- # decimal 1 00:31:27.984 17:10:16 -- scripts/common.sh@352 -- # local d=1 00:31:27.984 17:10:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:27.984 17:10:16 -- scripts/common.sh@354 -- # echo 1 00:31:27.984 17:10:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:31:27.984 17:10:16 -- scripts/common.sh@365 -- # decimal 2 00:31:27.984 17:10:16 -- scripts/common.sh@352 -- # local d=2 00:31:27.984 17:10:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:27.984 17:10:16 -- scripts/common.sh@354 -- # echo 2 00:31:27.984 17:10:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:31:27.984 17:10:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:31:27.984 17:10:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:31:27.984 17:10:16 -- scripts/common.sh@367 -- # return 0 00:31:27.984 17:10:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:27.984 17:10:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:31:27.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.984 --rc genhtml_branch_coverage=1 00:31:27.984 --rc genhtml_function_coverage=1 00:31:27.984 --rc genhtml_legend=1 00:31:27.984 --rc geninfo_all_blocks=1 00:31:27.984 --rc geninfo_unexecuted_blocks=1 00:31:27.984 00:31:27.984 ' 00:31:27.984 17:10:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:31:27.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.984 --rc genhtml_branch_coverage=1 00:31:27.984 --rc genhtml_function_coverage=1 00:31:27.984 --rc genhtml_legend=1 00:31:27.984 --rc geninfo_all_blocks=1 00:31:27.985 --rc geninfo_unexecuted_blocks=1 00:31:27.985 00:31:27.985 ' 00:31:27.985 17:10:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:31:27.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.985 --rc genhtml_branch_coverage=1 00:31:27.985 
--rc genhtml_function_coverage=1 00:31:27.985 --rc genhtml_legend=1 00:31:27.985 --rc geninfo_all_blocks=1 00:31:27.985 --rc geninfo_unexecuted_blocks=1 00:31:27.985 00:31:27.985 ' 00:31:27.985 17:10:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:31:27.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.985 --rc genhtml_branch_coverage=1 00:31:27.985 --rc genhtml_function_coverage=1 00:31:27.985 --rc genhtml_legend=1 00:31:27.985 --rc geninfo_all_blocks=1 00:31:27.985 --rc geninfo_unexecuted_blocks=1 00:31:27.985 00:31:27.985 ' 00:31:27.985 17:10:16 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:27.985 17:10:16 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:31:27.985 17:10:16 -- common/autotest_common.sh@1519 -- # bdfs=() 00:31:27.985 17:10:16 -- common/autotest_common.sh@1519 -- # local bdfs 00:31:27.985 17:10:16 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:31:27.985 17:10:16 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:31:27.985 17:10:16 -- common/autotest_common.sh@1508 -- # bdfs=() 00:31:27.985 17:10:16 -- common/autotest_common.sh@1508 -- # local bdfs 00:31:27.985 17:10:16 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:27.985 17:10:16 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:31:27.985 17:10:16 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:28.243 17:10:16 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:31:28.243 17:10:16 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:31:28.243 17:10:16 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:31:28.243 17:10:16 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:31:28.243 17:10:16 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=140606 00:31:28.243 17:10:16 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:31:28.243 17:10:16 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:31:28.243 17:10:16 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 140606 00:31:28.243 17:10:16 -- common/autotest_common.sh@829 -- # '[' -z 140606 ']' 00:31:28.243 17:10:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.243 17:10:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:28.243 17:10:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:28.243 17:10:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:28.243 17:10:16 -- common/autotest_common.sh@10 -- # set +x 00:31:28.243 [2024-11-05 17:10:16.998895] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
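[Annotation] The long register dump earlier in this log is produced by a parse loop in nvme/functions.sh: every 'reg : val' line emitted by 'nvme id-ns' is folded into a global associative array named after the namespace (nvme0n1[nsze]=0x140000 and so on). A condensed sketch of that loop follows; the function name, the whitespace trimming, and the process substitution are assumptions, and the real helper also walks controllers and their namespace lists:

    parse_id_ns() {                        # hypothetical name for the functions.sh helper
        local ref=$1 reg val               # e.g. ref=nvme0n1
        local -gA "$ref=()"                # declare the global array, as in the trace
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}       # 'nsze   ' -> 'nsze'
            val=${val# }                   # drop the separator's leading space
            [[ -n $val ]] || continue      # skip banner and blank lines
            eval "${ref}[${reg}]=\"${val}\""
        done < <(/usr/local/src/nvme-cli/nvme id-ns "/dev/$ref")
    }

Once the arrays exist, feature probes reduce to bit tests on the captured registers: with oncs=0x15d as echoed above, (( oncs & 1 << 8 )) is non-zero, which is exactly the Simple Copy Command check that lets nvme_scc.sh select nvme0.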
00:31:28.243 [2024-11-05 17:10:16.999662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140606 ] 00:31:28.501 [2024-11-05 17:10:17.167561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:28.501 [2024-11-05 17:10:17.332748] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:28.501 [2024-11-05 17:10:17.333141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.501 [2024-11-05 17:10:17.333151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.900 17:10:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:29.900 17:10:18 -- common/autotest_common.sh@862 -- # return 0 00:31:29.900 17:10:18 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:31:30.158 Nvme0n1 00:31:30.158 17:10:18 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:31:30.158 17:10:18 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:31:30.416 request: 00:31:30.416 { 00:31:30.416 "filename": "non_existing_file", 00:31:30.416 "bdev_name": "Nvme0n1", 00:31:30.416 "method": "bdev_nvme_apply_firmware", 00:31:30.416 "req_id": 1 00:31:30.416 } 00:31:30.416 Got JSON-RPC error response 00:31:30.416 response: 00:31:30.416 { 00:31:30.416 "code": -32603, 00:31:30.416 "message": "open file failed." 00:31:30.416 } 00:31:30.416 17:10:19 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:31:30.416 17:10:19 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:31:30.416 17:10:19 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:30.416 17:10:19 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:31:30.416 17:10:19 -- nvme/nvme_rpc.sh@40 -- # killprocess 140606 00:31:30.416 17:10:19 -- common/autotest_common.sh@936 -- # '[' -z 140606 ']' 00:31:30.416 17:10:19 -- common/autotest_common.sh@940 -- # kill -0 140606 00:31:30.416 17:10:19 -- common/autotest_common.sh@941 -- # uname 00:31:30.416 17:10:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:30.416 17:10:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140606 00:31:30.416 17:10:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:30.416 17:10:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:30.416 17:10:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140606' 00:31:30.416 killing process with pid 140606 00:31:30.416 17:10:19 -- common/autotest_common.sh@955 -- # kill 140606 00:31:30.416 17:10:19 -- common/autotest_common.sh@960 -- # wait 140606 00:31:32.312 ************************************ 00:31:32.312 END TEST nvme_rpc 00:31:32.312 ************************************ 00:31:32.312 00:31:32.312 real 0m4.250s 00:31:32.312 user 0m8.166s 00:31:32.312 sys 0m0.612s 00:31:32.312 17:10:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:32.312 17:10:20 -- common/autotest_common.sh@10 -- # set +x 00:31:32.312 17:10:20 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:31:32.312 17:10:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:32.312 17:10:20 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:31:32.312 17:10:20 -- common/autotest_common.sh@10 -- # set +x 00:31:32.312 ************************************ 00:31:32.312 START TEST nvme_rpc_timeouts 00:31:32.312 ************************************ 00:31:32.312 17:10:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:31:32.312 * Looking for test storage... 00:31:32.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:32.312 17:10:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:31:32.312 17:10:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:31:32.312 17:10:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:31:32.312 17:10:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:31:32.312 17:10:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:31:32.312 17:10:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:31:32.312 17:10:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:31:32.312 17:10:21 -- scripts/common.sh@335 -- # IFS=.-: 00:31:32.312 17:10:21 -- scripts/common.sh@335 -- # read -ra ver1 00:31:32.312 17:10:21 -- scripts/common.sh@336 -- # IFS=.-: 00:31:32.312 17:10:21 -- scripts/common.sh@336 -- # read -ra ver2 00:31:32.312 17:10:21 -- scripts/common.sh@337 -- # local 'op=<' 00:31:32.312 17:10:21 -- scripts/common.sh@339 -- # ver1_l=2 00:31:32.312 17:10:21 -- scripts/common.sh@340 -- # ver2_l=1 00:31:32.312 17:10:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:31:32.312 17:10:21 -- scripts/common.sh@343 -- # case "$op" in 00:31:32.312 17:10:21 -- scripts/common.sh@344 -- # : 1 00:31:32.312 17:10:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:31:32.312 17:10:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:32.312 17:10:21 -- scripts/common.sh@364 -- # decimal 1 00:31:32.312 17:10:21 -- scripts/common.sh@352 -- # local d=1 00:31:32.312 17:10:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:32.312 17:10:21 -- scripts/common.sh@354 -- # echo 1 00:31:32.312 17:10:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:31:32.313 17:10:21 -- scripts/common.sh@365 -- # decimal 2 00:31:32.313 17:10:21 -- scripts/common.sh@352 -- # local d=2 00:31:32.313 17:10:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:32.313 17:10:21 -- scripts/common.sh@354 -- # echo 2 00:31:32.313 17:10:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:31:32.313 17:10:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:31:32.313 17:10:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:31:32.313 17:10:21 -- scripts/common.sh@367 -- # return 0 00:31:32.313 17:10:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:32.313 17:10:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:31:32.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.313 --rc genhtml_branch_coverage=1 00:31:32.313 --rc genhtml_function_coverage=1 00:31:32.313 --rc genhtml_legend=1 00:31:32.313 --rc geninfo_all_blocks=1 00:31:32.313 --rc geninfo_unexecuted_blocks=1 00:31:32.313 00:31:32.313 ' 00:31:32.313 17:10:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:31:32.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.313 --rc genhtml_branch_coverage=1 00:31:32.313 --rc genhtml_function_coverage=1 00:31:32.313 --rc genhtml_legend=1 00:31:32.313 --rc geninfo_all_blocks=1 00:31:32.313 --rc geninfo_unexecuted_blocks=1 00:31:32.313 00:31:32.313 ' 00:31:32.313 17:10:21 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:31:32.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.313 --rc genhtml_branch_coverage=1 00:31:32.313 --rc genhtml_function_coverage=1 00:31:32.313 --rc genhtml_legend=1 00:31:32.313 --rc geninfo_all_blocks=1 00:31:32.313 --rc geninfo_unexecuted_blocks=1 00:31:32.313 00:31:32.313 ' 00:31:32.313 17:10:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:31:32.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.313 --rc genhtml_branch_coverage=1 00:31:32.313 --rc genhtml_function_coverage=1 00:31:32.313 --rc genhtml_legend=1 00:31:32.313 --rc geninfo_all_blocks=1 00:31:32.313 --rc geninfo_unexecuted_blocks=1 00:31:32.313 00:31:32.313 ' 00:31:32.313 17:10:21 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:32.313 17:10:21 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_140697 00:31:32.313 17:10:21 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_140697 00:31:32.313 17:10:21 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=140729 00:31:32.313 17:10:21 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:31:32.313 17:10:21 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:31:32.313 17:10:21 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 140729 00:31:32.313 17:10:21 -- common/autotest_common.sh@829 -- # '[' -z 140729 ']' 00:31:32.313 17:10:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:32.313 17:10:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:32.313 17:10:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:32.313 17:10:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:32.313 17:10:21 -- common/autotest_common.sh@10 -- # set +x 00:31:32.571 [2024-11-05 17:10:21.221285] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
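[Annotation] Both RPC suites start spdk_tgt in the background and then block in waitforlisten until the JSON-RPC socket answers, which is why the 'Waiting for process...' banner appears before the EAL initialization lines. A hedged sketch of that polling pattern; the real helper in autotest_common.sh differs in detail, and the rpc_get_methods probe and 1 s retry interval are assumptions:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1            # target already died
            "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods \
                >/dev/null 2>&1 && return 0                   # socket is up
            sleep 1
        done
        return 1                                              # gave up after max_retries
    }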
00:31:32.571 [2024-11-05 17:10:21.222023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140729 ] 00:31:32.571 [2024-11-05 17:10:21.377291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:32.829 [2024-11-05 17:10:21.542224] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:32.829 [2024-11-05 17:10:21.542808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.829 [2024-11-05 17:10:21.542813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.201 17:10:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:34.201 17:10:22 -- common/autotest_common.sh@862 -- # return 0 00:31:34.201 17:10:22 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:31:34.201 Checking default timeout settings: 00:31:34.201 17:10:22 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:34.458 Making settings changes with rpc: 00:31:34.458 17:10:23 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:31:34.458 17:10:23 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:31:34.716 Check default vs. modified settings: 00:31:34.716 17:10:23 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:31:34.716 17:10:23 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:34.974 17:10:23 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:31:34.974 17:10:23 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:34.974 17:10:23 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_140697 00:31:34.974 17:10:23 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:34.974 17:10:23 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:34.974 17:10:23 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:31:34.974 17:10:23 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_140697 00:31:34.974 17:10:23 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:34.974 17:10:23 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:34.974 17:10:23 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:31:34.974 Setting action_on_timeout is changed as expected. 00:31:34.974 17:10:23 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:31:34.974 17:10:23 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
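[Annotation] The pass/fail logic behind the 'changed as expected' messages below is a plain-text diff of two save_config dumps: each setting's value is grepped out of the default and the modified snapshot, stripped to alphanumerics, and compared. A minimal sketch using the variable names from the trace; the failure branch is an assumption:

    for setting in $settings_to_check; do      # action_on_timeout timeout_us timeout_admin_us
        setting_before=$(grep "$setting" "$tmpfile_default_settings"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        setting_modified=$(grep "$setting" "$tmpfile_modified_settings" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$setting_before" == "$setting_modified" ]; then
            echo "Setting $setting was not changed!"          # assumed failure path
            exit 1
        fi
        echo "Setting $setting is changed as expected."
    done

So none -> abort, 0 -> 12000000 and 0 -> 24000000 all register as changes, matching the three expected lines in the trace.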
00:31:34.974 17:10:23 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:34.974 17:10:23 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_140697 00:31:34.974 17:10:23 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:34.974 17:10:23 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:34.974 17:10:23 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:31:34.974 17:10:23 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_140697 00:31:34.974 17:10:23 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:34.975 17:10:23 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:34.975 17:10:23 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:31:34.975 17:10:23 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:31:34.975 17:10:23 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:31:34.975 Setting timeout_us is changed as expected. 00:31:34.975 17:10:23 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:34.975 17:10:23 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_140697 00:31:34.975 17:10:23 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:34.975 17:10:23 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:34.975 17:10:23 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:31:34.975 17:10:23 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_140697 00:31:34.975 17:10:23 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:34.975 17:10:23 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:34.975 17:10:23 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:31:34.975 Setting timeout_admin_us is changed as expected. 00:31:34.975 17:10:23 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:31:34.975 17:10:23 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:31:34.975 17:10:23 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:31:34.975 17:10:23 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_140697 /tmp/settings_modified_140697 00:31:34.975 17:10:23 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 140729 00:31:34.975 17:10:23 -- common/autotest_common.sh@936 -- # '[' -z 140729 ']' 00:31:34.975 17:10:23 -- common/autotest_common.sh@940 -- # kill -0 140729 00:31:34.975 17:10:23 -- common/autotest_common.sh@941 -- # uname 00:31:34.975 17:10:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:34.975 17:10:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140729 00:31:34.975 17:10:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:34.975 killing process with pid 140729 00:31:34.975 17:10:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:34.975 17:10:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140729' 00:31:34.975 17:10:23 -- common/autotest_common.sh@955 -- # kill 140729 00:31:34.975 17:10:23 -- common/autotest_common.sh@960 -- # wait 140729 00:31:36.875 RPC TIMEOUT SETTING TEST PASSED. 00:31:36.875 17:10:25 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
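[Annotation] Every START/END banner and real/user/sys triple in this log comes from the run_test wrapper in autotest_common.sh, which names a test, frames its output, and times it. A simplified sketch inferred from the output ordering (the real wrapper also manages the xtrace state, which is why xtrace_disable appears around each test); timing the whole group explains why real/user/sys trails the END banner:

    run_test() {
        local test_name=$1 rc; shift
        time {
            echo "************************************"
            echo "START TEST $test_name"
            echo "************************************"
            "$@"; rc=$?                    # run the suite itself
            echo "************************************"
            echo "END TEST $test_name"
            echo "************************************"
        }
        return $rc                         # propagate the suite's exit status
    }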
00:31:36.875 ************************************ 00:31:36.875 END TEST nvme_rpc_timeouts 00:31:36.875 ************************************ 00:31:36.875 00:31:36.875 real 0m4.611s 00:31:36.875 user 0m9.119s 00:31:36.875 sys 0m0.674s 00:31:36.875 17:10:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:36.875 17:10:25 -- common/autotest_common.sh@10 -- # set +x 00:31:36.875 17:10:25 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:31:36.875 17:10:25 -- spdk/autotest.sh@242 -- # [[ 0 -eq 1 ]] 00:31:36.875 17:10:25 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:31:36.875 17:10:25 -- spdk/autotest.sh@255 -- # timing_exit lib 00:31:36.875 17:10:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:36.875 17:10:25 -- common/autotest_common.sh@10 -- # set +x 00:31:36.875 17:10:25 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:31:36.875 17:10:25 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:31:36.875 17:10:25 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:31:36.875 17:10:25 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:31:36.875 17:10:25 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:31:36.875 17:10:25 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:31:36.875 17:10:25 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:31:36.875 17:10:25 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:31:36.875 17:10:25 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:31:36.875 17:10:25 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:31:36.875 17:10:25 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:31:36.875 17:10:25 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:31:36.875 17:10:25 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:31:36.876 17:10:25 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:36.876 17:10:25 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:31:36.876 17:10:25 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:31:36.876 17:10:25 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:31:36.876 17:10:25 -- spdk/autotest.sh@365 -- # [[ 1 -eq 1 ]] 00:31:36.876 17:10:25 -- spdk/autotest.sh@366 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:31:36.876 17:10:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:36.876 17:10:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:36.876 17:10:25 -- common/autotest_common.sh@10 -- # set +x 00:31:36.876 ************************************ 00:31:36.876 START TEST blockdev_raid5f 00:31:36.876 ************************************ 00:31:36.876 17:10:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:31:37.134 * Looking for test storage... 
00:31:37.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:31:37.134 17:10:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:31:37.134 17:10:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:31:37.134 17:10:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:31:37.134 17:10:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:31:37.135 17:10:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:31:37.135 17:10:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:31:37.135 17:10:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:31:37.135 17:10:25 -- scripts/common.sh@335 -- # IFS=.-: 00:31:37.135 17:10:25 -- scripts/common.sh@335 -- # read -ra ver1 00:31:37.135 17:10:25 -- scripts/common.sh@336 -- # IFS=.-: 00:31:37.135 17:10:25 -- scripts/common.sh@336 -- # read -ra ver2 00:31:37.135 17:10:25 -- scripts/common.sh@337 -- # local 'op=<' 00:31:37.135 17:10:25 -- scripts/common.sh@339 -- # ver1_l=2 00:31:37.135 17:10:25 -- scripts/common.sh@340 -- # ver2_l=1 00:31:37.135 17:10:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:31:37.135 17:10:25 -- scripts/common.sh@343 -- # case "$op" in 00:31:37.135 17:10:25 -- scripts/common.sh@344 -- # : 1 00:31:37.135 17:10:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:31:37.135 17:10:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:37.135 17:10:25 -- scripts/common.sh@364 -- # decimal 1 00:31:37.135 17:10:25 -- scripts/common.sh@352 -- # local d=1 00:31:37.135 17:10:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:37.135 17:10:25 -- scripts/common.sh@354 -- # echo 1 00:31:37.135 17:10:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:31:37.135 17:10:25 -- scripts/common.sh@365 -- # decimal 2 00:31:37.135 17:10:25 -- scripts/common.sh@352 -- # local d=2 00:31:37.135 17:10:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:37.135 17:10:25 -- scripts/common.sh@354 -- # echo 2 00:31:37.135 17:10:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:31:37.135 17:10:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:31:37.135 17:10:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:31:37.135 17:10:25 -- scripts/common.sh@367 -- # return 0 00:31:37.135 17:10:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:37.135 17:10:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:31:37.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.135 --rc genhtml_branch_coverage=1 00:31:37.135 --rc genhtml_function_coverage=1 00:31:37.135 --rc genhtml_legend=1 00:31:37.135 --rc geninfo_all_blocks=1 00:31:37.135 --rc geninfo_unexecuted_blocks=1 00:31:37.135 00:31:37.135 ' 00:31:37.135 17:10:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:31:37.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.135 --rc genhtml_branch_coverage=1 00:31:37.135 --rc genhtml_function_coverage=1 00:31:37.135 --rc genhtml_legend=1 00:31:37.135 --rc geninfo_all_blocks=1 00:31:37.135 --rc geninfo_unexecuted_blocks=1 00:31:37.135 00:31:37.135 ' 00:31:37.135 17:10:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:31:37.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.135 --rc genhtml_branch_coverage=1 00:31:37.135 --rc genhtml_function_coverage=1 00:31:37.135 --rc genhtml_legend=1 00:31:37.135 --rc geninfo_all_blocks=1 00:31:37.135 --rc geninfo_unexecuted_blocks=1 00:31:37.135 00:31:37.135 ' 00:31:37.135 17:10:25 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:31:37.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.135 --rc genhtml_branch_coverage=1 00:31:37.135 --rc genhtml_function_coverage=1 00:31:37.135 --rc genhtml_legend=1 00:31:37.135 --rc geninfo_all_blocks=1 00:31:37.135 --rc geninfo_unexecuted_blocks=1 00:31:37.135 00:31:37.135 ' 00:31:37.135 17:10:25 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:31:37.135 17:10:25 -- bdev/nbd_common.sh@6 -- # set -e 00:31:37.135 17:10:25 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:31:37.135 17:10:25 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:37.135 17:10:25 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:31:37.135 17:10:25 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:31:37.135 17:10:25 -- bdev/blockdev.sh@18 -- # : 00:31:37.135 17:10:25 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:31:37.135 17:10:25 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:31:37.135 17:10:25 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:31:37.135 17:10:25 -- bdev/blockdev.sh@672 -- # uname -s 00:31:37.135 17:10:25 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:31:37.135 17:10:25 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:31:37.135 17:10:25 -- bdev/blockdev.sh@680 -- # test_type=raid5f 00:31:37.135 17:10:25 -- bdev/blockdev.sh@681 -- # crypto_device= 00:31:37.135 17:10:25 -- bdev/blockdev.sh@682 -- # dek= 00:31:37.135 17:10:25 -- bdev/blockdev.sh@683 -- # env_ctx= 00:31:37.135 17:10:25 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:31:37.135 17:10:25 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:31:37.135 17:10:25 -- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]] 00:31:37.135 17:10:25 -- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]] 00:31:37.135 17:10:25 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:31:37.135 17:10:25 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=140893 00:31:37.135 17:10:25 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:37.135 17:10:25 -- bdev/blockdev.sh@47 -- # waitforlisten 140893 00:31:37.135 17:10:25 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:31:37.135 17:10:25 -- common/autotest_common.sh@829 -- # '[' -z 140893 ']' 00:31:37.135 17:10:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:37.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:37.135 17:10:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:37.135 17:10:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:37.135 17:10:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:37.135 17:10:25 -- common/autotest_common.sh@10 -- # set +x 00:31:37.135 [2024-11-05 17:10:25.936449] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
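[Annotation] The bdev under test is assembled by setup_raid5f_conf via rpc_cmd, and the JSON dump that follows confirms the result: a raid5f volume with a 2 KiB strip over three malloc bdevs of 65536 512-byte blocks each. A plausible sketch of that sequence; the 32 MiB malloc size is inferred from the dump and the exact flags used in blockdev.sh may differ:

    setup_raid5f_conf() {
        rpc_cmd bdev_malloc_create -b Malloc0 32 512      # 32 MiB, 512 B blocks
        rpc_cmd bdev_malloc_create -b Malloc1 32 512
        rpc_cmd bdev_malloc_create -b Malloc2 32 512
        rpc_cmd bdev_raid_create -n raid5f -z 2 -r raid5f -b "Malloc0 Malloc1 Malloc2"
    }

With two data strips plus one parity strip per stripe, the volume exposes 2/3 of the raw capacity: 131072 blocks (64 MiB), which matches the num_blocks reported in the dump below.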
00:31:37.135 [2024-11-05 17:10:25.936616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140893 ] 00:31:37.394 [2024-11-05 17:10:26.081802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.394 [2024-11-05 17:10:26.258316] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:37.394 [2024-11-05 17:10:26.258536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.770 17:10:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:38.770 17:10:27 -- common/autotest_common.sh@862 -- # return 0 00:31:38.770 17:10:27 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:31:38.770 17:10:27 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf 00:31:38.770 17:10:27 -- bdev/blockdev.sh@278 -- # rpc_cmd 00:31:38.771 17:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.771 17:10:27 -- common/autotest_common.sh@10 -- # set +x 00:31:38.771 Malloc0 00:31:38.771 Malloc1 00:31:39.029 Malloc2 00:31:39.029 17:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.029 17:10:27 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:31:39.029 17:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.029 17:10:27 -- common/autotest_common.sh@10 -- # set +x 00:31:39.029 17:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.029 17:10:27 -- bdev/blockdev.sh@738 -- # cat 00:31:39.029 17:10:27 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:31:39.029 17:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.029 17:10:27 -- common/autotest_common.sh@10 -- # set +x 00:31:39.029 17:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.029 17:10:27 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:31:39.029 17:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.029 17:10:27 -- common/autotest_common.sh@10 -- # set +x 00:31:39.029 17:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.029 17:10:27 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:31:39.029 17:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.029 17:10:27 -- common/autotest_common.sh@10 -- # set +x 00:31:39.029 17:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.029 17:10:27 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:31:39.029 17:10:27 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:31:39.029 17:10:27 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:31:39.029 17:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.029 17:10:27 -- common/autotest_common.sh@10 -- # set +x 00:31:39.029 17:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.029 17:10:27 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:31:39.029 17:10:27 -- bdev/blockdev.sh@747 -- # jq -r .name 00:31:39.029 17:10:27 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "79257110-9a18-44dd-a730-26a66cbc9112"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "79257110-9a18-44dd-a730-26a66cbc9112",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "79257110-9a18-44dd-a730-26a66cbc9112",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "06cb94e1-2f0d-4331-ad52-88607685ef5e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "ae0bc785-6fc5-4014-aecd-15c008288dd5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "b525fc6e-035c-4eed-83d1-6dddaa489053",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:31:39.029 17:10:27 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:31:39.029 17:10:27 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f 00:31:39.029 17:10:27 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:31:39.029 17:10:27 -- bdev/blockdev.sh@752 -- # killprocess 140893 00:31:39.029 17:10:27 -- common/autotest_common.sh@936 -- # '[' -z 140893 ']' 00:31:39.029 17:10:27 -- common/autotest_common.sh@940 -- # kill -0 140893 00:31:39.029 17:10:27 -- common/autotest_common.sh@941 -- # uname 00:31:39.029 17:10:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:39.029 17:10:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140893 00:31:39.029 17:10:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:39.029 17:10:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:39.029 killing process with pid 140893 00:31:39.029 17:10:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140893' 00:31:39.029 17:10:27 -- common/autotest_common.sh@955 -- # kill 140893 00:31:39.029 17:10:27 -- common/autotest_common.sh@960 -- # wait 140893 00:31:40.931 17:10:29 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:40.931 17:10:29 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:31:40.931 17:10:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:31:40.931 17:10:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:40.931 17:10:29 -- common/autotest_common.sh@10 -- # set +x 00:31:41.189 ************************************ 00:31:41.189 START TEST bdev_hello_world 00:31:41.189 ************************************ 00:31:41.189 17:10:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:31:41.189 [2024-11-05 17:10:29.899375] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:31:41.189 [2024-11-05 17:10:29.900153] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140969 ] 00:31:41.189 [2024-11-05 17:10:30.068870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.447 [2024-11-05 17:10:30.247166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.014 [2024-11-05 17:10:30.688004] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:31:42.014 [2024-11-05 17:10:30.688093] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:31:42.015 [2024-11-05 17:10:30.688131] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:31:42.015 [2024-11-05 17:10:30.688625] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:31:42.015 [2024-11-05 17:10:30.688756] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:31:42.015 [2024-11-05 17:10:30.688786] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:31:42.015 [2024-11-05 17:10:30.688858] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:31:42.015 00:31:42.015 [2024-11-05 17:10:30.688896] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:31:42.951 00:31:42.951 real 0m1.955s 00:31:42.951 user 0m1.566s 00:31:42.951 sys 0m0.268s 00:31:42.951 17:10:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:42.951 17:10:31 -- common/autotest_common.sh@10 -- # set +x 00:31:42.951 ************************************ 00:31:42.951 END TEST bdev_hello_world 00:31:42.951 ************************************ 00:31:42.951 17:10:31 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:31:42.951 17:10:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:42.951 17:10:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:42.951 17:10:31 -- common/autotest_common.sh@10 -- # set +x 00:31:42.951 ************************************ 00:31:42.951 START TEST bdev_bounds 00:31:42.951 ************************************ 00:31:42.951 17:10:31 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:31:42.951 17:10:31 -- bdev/blockdev.sh@288 -- # bdevio_pid=141019 00:31:42.951 17:10:31 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:31:42.951 Process bdevio pid: 141019 00:31:42.951 17:10:31 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 141019' 00:31:42.951 17:10:31 -- bdev/blockdev.sh@291 -- # waitforlisten 141019 00:31:42.951 17:10:31 -- common/autotest_common.sh@829 -- # '[' -z 141019 ']' 00:31:42.951 17:10:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:42.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:42.951 17:10:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:42.951 17:10:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:42.951 17:10:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:42.951 17:10:31 -- common/autotest_common.sh@10 -- # set +x 00:31:42.952 17:10:31 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:43.210 [2024-11-05 17:10:31.911832] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:31:43.210 [2024-11-05 17:10:31.912268] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141019 ] 00:31:43.210 [2024-11-05 17:10:32.092834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:43.468 [2024-11-05 17:10:32.251261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.468 [2024-11-05 17:10:32.251399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:43.468 [2024-11-05 17:10:32.251415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.842 17:10:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:44.842 17:10:33 -- common/autotest_common.sh@862 -- # return 0 00:31:44.842 17:10:33 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:44.842 I/O targets: 00:31:44.842 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:31:44.842 00:31:44.842 00:31:44.842 CUnit - A unit testing framework for C - Version 2.1-3 00:31:44.842 http://cunit.sourceforge.net/ 00:31:44.842 00:31:44.842 00:31:44.842 Suite: bdevio tests on: raid5f 00:31:44.842 Test: blockdev write read block ...passed 00:31:44.842 Test: blockdev write zeroes read block ...passed 00:31:44.842 Test: blockdev write zeroes read no split ...passed 00:31:44.842 Test: blockdev write zeroes read split ...passed 00:31:45.100 Test: blockdev write zeroes read split partial ...passed 00:31:45.100 Test: blockdev reset ...passed 00:31:45.100 Test: blockdev write read 8 blocks ...passed 00:31:45.100 Test: blockdev write read size > 128k ...passed 00:31:45.100 Test: blockdev write read invalid size ...passed 00:31:45.100 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:45.100 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:45.100 Test: blockdev write read max offset ...passed 00:31:45.100 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:45.100 Test: blockdev writev readv 8 blocks ...passed 00:31:45.100 Test: blockdev writev readv 30 x 1block ...passed 00:31:45.100 Test: blockdev writev readv block ...passed 00:31:45.100 Test: blockdev writev readv size > 128k ...passed 00:31:45.100 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:45.100 Test: blockdev comparev and writev ...passed 00:31:45.100 Test: blockdev nvme passthru rw ...passed 00:31:45.100 Test: blockdev nvme passthru vendor specific ...passed 00:31:45.100 Test: blockdev nvme admin passthru ...passed 00:31:45.100 Test: blockdev copy ...passed 00:31:45.100 00:31:45.100 Run Summary: Type Total Ran Passed Failed Inactive 00:31:45.100 suites 1 1 n/a 0 0 00:31:45.100 tests 23 23 23 0 0 00:31:45.100 asserts 130 130 130 0 n/a 00:31:45.100 00:31:45.100 Elapsed time = 0.445 seconds 00:31:45.100 0 00:31:45.100 17:10:33 -- bdev/blockdev.sh@293 -- # killprocess 141019 00:31:45.100 17:10:33 -- common/autotest_common.sh@936 -- # '[' -z 141019 ']' 
00:31:45.100 17:10:33 -- common/autotest_common.sh@940 -- # kill -0 141019 00:31:45.100 17:10:33 -- common/autotest_common.sh@941 -- # uname 00:31:45.100 17:10:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:45.100 17:10:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 141019 00:31:45.100 17:10:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:45.100 17:10:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:45.100 17:10:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 141019' 00:31:45.100 killing process with pid 141019 00:31:45.100 17:10:33 -- common/autotest_common.sh@955 -- # kill 141019 00:31:45.100 17:10:33 -- common/autotest_common.sh@960 -- # wait 141019 00:31:46.479 17:10:35 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:31:46.479 00:31:46.479 real 0m3.251s 00:31:46.479 user 0m8.330s 00:31:46.479 sys 0m0.367s 00:31:46.479 17:10:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:46.479 17:10:35 -- common/autotest_common.sh@10 -- # set +x 00:31:46.479 ************************************ 00:31:46.479 END TEST bdev_bounds 00:31:46.479 ************************************ 00:31:46.479 17:10:35 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:31:46.479 17:10:35 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:31:46.479 17:10:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:46.479 17:10:35 -- common/autotest_common.sh@10 -- # set +x 00:31:46.479 ************************************ 00:31:46.479 START TEST bdev_nbd 00:31:46.479 ************************************ 00:31:46.479 17:10:35 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:31:46.479 17:10:35 -- bdev/blockdev.sh@298 -- # uname -s 00:31:46.479 17:10:35 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:31:46.479 17:10:35 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:46.479 17:10:35 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:46.479 17:10:35 -- bdev/blockdev.sh@302 -- # bdev_all=('raid5f') 00:31:46.479 17:10:35 -- bdev/blockdev.sh@302 -- # local bdev_all 00:31:46.479 17:10:35 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:31:46.479 17:10:35 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:31:46.479 17:10:35 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:31:46.479 17:10:35 -- bdev/blockdev.sh@309 -- # local nbd_all 00:31:46.479 17:10:35 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:31:46.479 17:10:35 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:31:46.479 17:10:35 -- bdev/blockdev.sh@312 -- # local nbd_list 00:31:46.479 17:10:35 -- bdev/blockdev.sh@313 -- # bdev_list=('raid5f') 00:31:46.479 17:10:35 -- bdev/blockdev.sh@313 -- # local bdev_list 00:31:46.479 17:10:35 -- bdev/blockdev.sh@316 -- # nbd_pid=141097 00:31:46.479 17:10:35 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:46.479 17:10:35 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:31:46.479 17:10:35 -- bdev/blockdev.sh@318 -- 
# waitforlisten 141097 /var/tmp/spdk-nbd.sock 00:31:46.479 17:10:35 -- common/autotest_common.sh@829 -- # '[' -z 141097 ']' 00:31:46.479 17:10:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:46.479 17:10:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:46.479 17:10:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:46.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:46.479 17:10:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:46.479 17:10:35 -- common/autotest_common.sh@10 -- # set +x 00:31:46.479 [2024-11-05 17:10:35.199878] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:31:46.479 [2024-11-05 17:10:35.200031] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.479 [2024-11-05 17:10:35.355381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.744 [2024-11-05 17:10:35.553868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.312 17:10:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:47.312 17:10:36 -- common/autotest_common.sh@862 -- # return 0 00:31:47.312 17:10:36 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:31:47.312 17:10:36 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:47.312 17:10:36 -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:31:47.312 17:10:36 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:31:47.312 17:10:36 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:31:47.312 17:10:36 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:47.312 17:10:36 -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:31:47.312 17:10:36 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:31:47.312 17:10:36 -- bdev/nbd_common.sh@24 -- # local i 00:31:47.312 17:10:36 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:31:47.312 17:10:36 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:31:47.312 17:10:36 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:31:47.312 17:10:36 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:31:47.571 17:10:36 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:31:47.571 17:10:36 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:31:47.571 17:10:36 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:31:47.571 17:10:36 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:47.571 17:10:36 -- common/autotest_common.sh@867 -- # local i 00:31:47.571 17:10:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:47.571 17:10:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:47.571 17:10:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:47.571 17:10:36 -- common/autotest_common.sh@871 -- # break 00:31:47.571 17:10:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:47.571 17:10:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:47.571 17:10:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:47.571 1+0 records in 00:31:47.571 1+0 
records out 00:31:47.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000985101 s, 4.2 MB/s 00:31:47.571 17:10:36 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:47.571 17:10:36 -- common/autotest_common.sh@884 -- # size=4096 00:31:47.571 17:10:36 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:47.571 17:10:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:47.571 17:10:36 -- common/autotest_common.sh@887 -- # return 0 00:31:47.571 17:10:36 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:47.571 17:10:36 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:31:47.571 17:10:36 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:47.829 17:10:36 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:31:47.829 { 00:31:47.829 "nbd_device": "/dev/nbd0", 00:31:47.829 "bdev_name": "raid5f" 00:31:47.829 } 00:31:47.829 ]' 00:31:47.829 17:10:36 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:31:47.829 17:10:36 -- bdev/nbd_common.sh@119 -- # echo '[ 00:31:47.829 { 00:31:47.829 "nbd_device": "/dev/nbd0", 00:31:47.829 "bdev_name": "raid5f" 00:31:47.829 } 00:31:47.829 ]' 00:31:47.829 17:10:36 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:31:47.829 17:10:36 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:47.829 17:10:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:47.829 17:10:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:47.829 17:10:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:47.829 17:10:36 -- bdev/nbd_common.sh@51 -- # local i 00:31:47.829 17:10:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:47.829 17:10:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:48.088 17:10:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:48.088 17:10:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:48.088 17:10:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:48.088 17:10:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:48.088 17:10:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:48.088 17:10:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:48.088 17:10:36 -- bdev/nbd_common.sh@41 -- # break 00:31:48.088 17:10:36 -- bdev/nbd_common.sh@45 -- # return 0 00:31:48.088 17:10:36 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:48.088 17:10:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:48.088 17:10:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@65 -- # true 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@65 -- # count=0 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@122 -- # count=0 00:31:48.347 17:10:37 -- 
bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@127 -- # return 0 00:31:48.347 17:10:37 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@12 -- # local i 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:48.347 17:10:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:31:48.606 /dev/nbd0 00:31:48.606 17:10:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:48.606 17:10:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:48.606 17:10:37 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:48.606 17:10:37 -- common/autotest_common.sh@867 -- # local i 00:31:48.606 17:10:37 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:48.606 17:10:37 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:48.606 17:10:37 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:48.606 17:10:37 -- common/autotest_common.sh@871 -- # break 00:31:48.606 17:10:37 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:48.606 17:10:37 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:48.606 17:10:37 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:48.606 1+0 records in 00:31:48.606 1+0 records out 00:31:48.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318846 s, 12.8 MB/s 00:31:48.606 17:10:37 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:48.606 17:10:37 -- common/autotest_common.sh@884 -- # size=4096 00:31:48.606 17:10:37 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:48.606 17:10:37 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:48.606 17:10:37 -- common/autotest_common.sh@887 -- # return 0 00:31:48.606 17:10:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:48.606 17:10:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:48.606 17:10:37 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:48.606 17:10:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:48.606 17:10:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:48.864 17:10:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:48.864 { 00:31:48.864 "nbd_device": "/dev/nbd0", 00:31:48.864 "bdev_name": "raid5f" 00:31:48.864 } 00:31:48.865 ]' 
00:31:48.865 17:10:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:48.865 { 00:31:48.865 "nbd_device": "/dev/nbd0", 00:31:48.865 "bdev_name": "raid5f" 00:31:48.865 } 00:31:48.865 ]' 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@65 -- # count=1 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@66 -- # echo 1 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@95 -- # count=1 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:31:48.865 256+0 records in 00:31:48.865 256+0 records out 00:31:48.865 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.007553 s, 139 MB/s 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:48.865 256+0 records in 00:31:48.865 256+0 records out 00:31:48.865 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280153 s, 37.4 MB/s 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@51 -- # local i 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:48.865 17:10:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:49.123 17:10:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:49.123 17:10:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:49.123 17:10:37 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:49.123 17:10:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:49.123 17:10:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:49.123 17:10:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:49.123 17:10:37 -- bdev/nbd_common.sh@41 -- # break 00:31:49.123 17:10:37 -- bdev/nbd_common.sh@45 -- # return 0 00:31:49.123 17:10:37 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:49.123 17:10:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:49.123 17:10:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:49.382 17:10:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:49.382 17:10:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:49.382 17:10:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:49.382 17:10:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:49.382 17:10:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:49.382 17:10:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:49.382 17:10:38 -- bdev/nbd_common.sh@65 -- # true 00:31:49.382 17:10:38 -- bdev/nbd_common.sh@65 -- # count=0 00:31:49.382 17:10:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:49.382 17:10:38 -- bdev/nbd_common.sh@104 -- # count=0 00:31:49.382 17:10:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:49.382 17:10:38 -- bdev/nbd_common.sh@109 -- # return 0 00:31:49.382 17:10:38 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:49.382 17:10:38 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:49.382 17:10:38 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:31:49.382 17:10:38 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:31:49.382 17:10:38 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:31:49.382 17:10:38 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:31:49.641 malloc_lvol_verify 00:31:49.641 17:10:38 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:31:49.899 391dfc87-553c-44b3-9a1b-7d991c00b0bf 00:31:49.899 17:10:38 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:31:50.158 aac4cdad-4dc2-4f76-8f72-3e6d5dd80009 00:31:50.158 17:10:38 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:31:50.416 /dev/nbd0 00:31:50.416 17:10:39 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:31:50.416 mke2fs 1.46.5 (30-Dec-2021) 00:31:50.416 00:31:50.416 Filesystem too small for a journal 00:31:50.416 Discarding device blocks: 0/1024 done 00:31:50.416 Creating filesystem with 1024 4k blocks and 1024 inodes 00:31:50.416 00:31:50.416 Allocating group tables: 0/1 done 00:31:50.416 Writing inode tables: 0/1 done 00:31:50.416 Writing superblocks and filesystem accounting information: 0/1 done 00:31:50.416 00:31:50.416 17:10:39 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:31:50.416 17:10:39 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:50.416 17:10:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:50.416 17:10:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:50.416 17:10:39 -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:31:50.416 17:10:39 -- bdev/nbd_common.sh@51 -- # local i 00:31:50.417 17:10:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:50.417 17:10:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:50.417 17:10:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:50.417 17:10:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:50.417 17:10:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:50.417 17:10:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:50.417 17:10:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:50.417 17:10:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:50.417 17:10:39 -- bdev/nbd_common.sh@41 -- # break 00:31:50.417 17:10:39 -- bdev/nbd_common.sh@45 -- # return 0 00:31:50.417 17:10:39 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:31:50.417 17:10:39 -- bdev/nbd_common.sh@147 -- # return 0 00:31:50.417 17:10:39 -- bdev/blockdev.sh@324 -- # killprocess 141097 00:31:50.417 17:10:39 -- common/autotest_common.sh@936 -- # '[' -z 141097 ']' 00:31:50.417 17:10:39 -- common/autotest_common.sh@940 -- # kill -0 141097 00:31:50.417 17:10:39 -- common/autotest_common.sh@941 -- # uname 00:31:50.675 17:10:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:50.675 17:10:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 141097 00:31:50.675 17:10:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:50.675 17:10:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:50.675 17:10:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 141097' 00:31:50.675 killing process with pid 141097 00:31:50.675 17:10:39 -- common/autotest_common.sh@955 -- # kill 141097 00:31:50.675 17:10:39 -- common/autotest_common.sh@960 -- # wait 141097 00:31:52.065 17:10:40 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:31:52.065 00:31:52.065 real 0m5.467s 00:31:52.065 user 0m7.664s 00:31:52.065 sys 0m1.142s 00:31:52.065 17:10:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:52.065 17:10:40 -- common/autotest_common.sh@10 -- # set +x 00:31:52.065 ************************************ 00:31:52.065 END TEST bdev_nbd 00:31:52.065 ************************************ 00:31:52.065 17:10:40 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:31:52.065 17:10:40 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']' 00:31:52.065 17:10:40 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']' 00:31:52.065 17:10:40 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:31:52.065 17:10:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:52.065 17:10:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:52.065 17:10:40 -- common/autotest_common.sh@10 -- # set +x 00:31:52.065 ************************************ 00:31:52.065 START TEST bdev_fio 00:31:52.065 ************************************ 00:31:52.065 17:10:40 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:31:52.065 17:10:40 -- bdev/blockdev.sh@329 -- # local env_context 00:31:52.065 17:10:40 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:31:52.065 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:31:52.065 17:10:40 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:31:52.065 17:10:40 -- bdev/blockdev.sh@337 -- # echo '' 00:31:52.065 17:10:40 -- 
bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:31:52.065 17:10:40 -- bdev/blockdev.sh@337 -- # env_context= 00:31:52.065 17:10:40 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:31:52.065 17:10:40 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:52.065 17:10:40 -- common/autotest_common.sh@1270 -- # local workload=verify 00:31:52.065 17:10:40 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:31:52.065 17:10:40 -- common/autotest_common.sh@1272 -- # local env_context= 00:31:52.065 17:10:40 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:31:52.065 17:10:40 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:31:52.065 17:10:40 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:31:52.065 17:10:40 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:31:52.065 17:10:40 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:52.065 17:10:40 -- common/autotest_common.sh@1290 -- # cat 00:31:52.065 17:10:40 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:31:52.065 17:10:40 -- common/autotest_common.sh@1303 -- # cat 00:31:52.065 17:10:40 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:31:52.065 17:10:40 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:31:52.065 17:10:40 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:31:52.065 17:10:40 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:31:52.065 17:10:40 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:31:52.065 17:10:40 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]' 00:31:52.065 17:10:40 -- bdev/blockdev.sh@341 -- # echo filename=raid5f 00:31:52.065 17:10:40 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:31:52.065 17:10:40 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:52.065 17:10:40 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:31:52.065 17:10:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:52.065 17:10:40 -- common/autotest_common.sh@10 -- # set +x 00:31:52.065 ************************************ 00:31:52.065 START TEST bdev_fio_rw_verify 00:31:52.065 ************************************ 00:31:52.065 17:10:40 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:52.065 17:10:40 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:52.065 
17:10:40 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:31:52.065 17:10:40 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:52.065 17:10:40 -- common/autotest_common.sh@1328 -- # local sanitizers 00:31:52.066 17:10:40 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:52.066 17:10:40 -- common/autotest_common.sh@1330 -- # shift 00:31:52.066 17:10:40 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:31:52.066 17:10:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:31:52.066 17:10:40 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:52.066 17:10:40 -- common/autotest_common.sh@1334 -- # grep libasan 00:31:52.066 17:10:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:31:52.066 17:10:40 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:31:52.066 17:10:40 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:31:52.066 17:10:40 -- common/autotest_common.sh@1336 -- # break 00:31:52.066 17:10:40 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:52.066 17:10:40 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:52.066 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:52.066 fio-3.35 00:31:52.066 Starting 1 thread 00:32:04.271 00:32:04.271 job_raid5f: (groupid=0, jobs=1): err= 0: pid=141328: Tue Nov 5 17:10:51 2024 00:32:04.271 read: IOPS=12.0k, BW=47.0MiB/s (49.3MB/s)(470MiB/10001msec) 00:32:04.271 slat (usec): min=18, max=323, avg=19.76, stdev= 4.22 00:32:04.271 clat (usec): min=11, max=744, avg=132.63, stdev=48.72 00:32:04.271 lat (usec): min=32, max=767, avg=152.40, stdev=49.60 00:32:04.271 clat percentiles (usec): 00:32:04.271 | 50.000th=[ 139], 99.000th=[ 237], 99.900th=[ 347], 99.990th=[ 627], 00:32:04.271 | 99.999th=[ 725] 00:32:04.271 write: IOPS=12.6k, BW=49.3MiB/s (51.7MB/s)(487MiB/9874msec); 0 zone resets 00:32:04.271 slat (usec): min=8, max=401, avg=17.05, stdev= 4.57 00:32:04.271 clat (usec): min=59, max=1174, avg=303.84, stdev=48.03 00:32:04.271 lat (usec): min=75, max=1221, avg=320.89, stdev=49.62 00:32:04.271 clat percentiles (usec): 00:32:04.271 | 50.000th=[ 306], 99.000th=[ 474], 99.900th=[ 709], 99.990th=[ 938], 00:32:04.271 | 99.999th=[ 1139] 00:32:04.271 bw ( KiB/s): min=43416, max=53480, per=98.56%, avg=49798.32, stdev=2225.94, samples=19 00:32:04.271 iops : min=10854, max=13370, avg=12449.58, stdev=556.48, samples=19 00:32:04.271 lat (usec) : 20=0.01%, 50=0.01%, 100=16.02%, 250=38.21%, 500=45.41% 00:32:04.271 lat (usec) : 750=0.33%, 1000=0.03% 00:32:04.271 lat (msec) : 2=0.01% 00:32:04.271 cpu : usr=99.14%, sys=0.77%, ctx=149, majf=0, minf=8565 00:32:04.271 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:04.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.271 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.271 issued rwts: total=120331,124728,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:32:04.271 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:04.271 00:32:04.271 Run status group 0 (all jobs): 00:32:04.271 READ: bw=47.0MiB/s (49.3MB/s), 47.0MiB/s-47.0MiB/s (49.3MB/s-49.3MB/s), io=470MiB (493MB), run=10001-10001msec 00:32:04.271 WRITE: bw=49.3MiB/s (51.7MB/s), 49.3MiB/s-49.3MiB/s (51.7MB/s-51.7MB/s), io=487MiB (511MB), run=9874-9874msec 00:32:04.271 ----------------------------------------------------- 00:32:04.271 Suppressions used: 00:32:04.271 count bytes template 00:32:04.271 1 7 /usr/src/fio/parse.c 00:32:04.271 666 63936 /usr/src/fio/iolog.c 00:32:04.271 1 904 libcrypto.so 00:32:04.271 ----------------------------------------------------- 00:32:04.271 00:32:04.271 00:32:04.271 real 0m12.354s 00:32:04.271 user 0m12.786s 00:32:04.271 sys 0m0.724s 00:32:04.271 17:10:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:04.271 17:10:53 -- common/autotest_common.sh@10 -- # set +x 00:32:04.271 ************************************ 00:32:04.271 END TEST bdev_fio_rw_verify 00:32:04.271 ************************************ 00:32:04.271 17:10:53 -- bdev/blockdev.sh@348 -- # rm -f 00:32:04.271 17:10:53 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:04.271 17:10:53 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:32:04.271 17:10:53 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:04.271 17:10:53 -- common/autotest_common.sh@1270 -- # local workload=trim 00:32:04.271 17:10:53 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:32:04.271 17:10:53 -- common/autotest_common.sh@1272 -- # local env_context= 00:32:04.271 17:10:53 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:32:04.271 17:10:53 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:32:04.271 17:10:53 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:32:04.271 17:10:53 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:32:04.271 17:10:53 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:04.271 17:10:53 -- common/autotest_common.sh@1290 -- # cat 00:32:04.272 17:10:53 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:32:04.272 17:10:53 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:32:04.272 17:10:53 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:32:04.272 17:10:53 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "79257110-9a18-44dd-a730-26a66cbc9112"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "79257110-9a18-44dd-a730-26a66cbc9112",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "79257110-9a18-44dd-a730-26a66cbc9112",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' 
"base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "06cb94e1-2f0d-4331-ad52-88607685ef5e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "ae0bc785-6fc5-4014-aecd-15c008288dd5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "b525fc6e-035c-4eed-83d1-6dddaa489053",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:32:04.272 17:10:53 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:32:04.530 17:10:53 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:32:04.530 17:10:53 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:04.530 /home/vagrant/spdk_repo/spdk 00:32:04.530 17:10:53 -- bdev/blockdev.sh@360 -- # popd 00:32:04.530 17:10:53 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:32:04.530 17:10:53 -- bdev/blockdev.sh@362 -- # return 0 00:32:04.530 00:32:04.530 real 0m12.531s 00:32:04.530 user 0m12.896s 00:32:04.530 sys 0m0.791s 00:32:04.530 17:10:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:04.530 17:10:53 -- common/autotest_common.sh@10 -- # set +x 00:32:04.530 ************************************ 00:32:04.530 END TEST bdev_fio 00:32:04.530 ************************************ 00:32:04.530 17:10:53 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:04.530 17:10:53 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:04.530 17:10:53 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:32:04.530 17:10:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:04.530 17:10:53 -- common/autotest_common.sh@10 -- # set +x 00:32:04.530 ************************************ 00:32:04.530 START TEST bdev_verify 00:32:04.530 ************************************ 00:32:04.530 17:10:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:04.530 [2024-11-05 17:10:53.312201] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:32:04.530 [2024-11-05 17:10:53.312405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141504 ] 00:32:04.788 [2024-11-05 17:10:53.484235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:04.788 [2024-11-05 17:10:53.673940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.788 [2024-11-05 17:10:53.673959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.356 Running I/O for 5 seconds... 
00:32:10.635 00:32:10.635 Latency(us) 00:32:10.635 [2024-11-05T17:10:59.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.635 [2024-11-05T17:10:59.512Z] Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:10.635 Verification LBA range: start 0x0 length 0x2000 00:32:10.635 raid5f : 5.01 8431.06 32.93 0.00 0.00 24072.92 137.77 19303.33 00:32:10.635 [2024-11-05T17:10:59.512Z] Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:10.635 Verification LBA range: start 0x2000 length 0x2000 00:32:10.635 raid5f : 5.01 8302.03 32.43 0.00 0.00 24443.36 636.74 20137.43 00:32:10.635 [2024-11-05T17:10:59.512Z] =================================================================================================================== 00:32:10.635 [2024-11-05T17:10:59.512Z] Total : 16733.09 65.36 0.00 0.00 24256.71 137.77 20137.43 00:32:11.570 00:32:11.570 real 0m7.182s 00:32:11.570 user 0m13.101s 00:32:11.570 sys 0m0.336s 00:32:11.570 17:11:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:11.570 17:11:00 -- common/autotest_common.sh@10 -- # set +x 00:32:11.570 ************************************ 00:32:11.570 END TEST bdev_verify 00:32:11.570 ************************************ 00:32:11.571 17:11:00 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:11.571 17:11:00 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:32:11.571 17:11:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:11.571 17:11:00 -- common/autotest_common.sh@10 -- # set +x 00:32:11.829 ************************************ 00:32:11.829 START TEST bdev_verify_big_io 00:32:11.829 ************************************ 00:32:11.829 17:11:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:11.829 [2024-11-05 17:11:00.536366] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:32:11.829 [2024-11-05 17:11:00.536726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141607 ] 00:32:11.829 [2024-11-05 17:11:00.693840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:12.088 [2024-11-05 17:11:00.882486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:12.088 [2024-11-05 17:11:00.882504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.655 Running I/O for 5 seconds... 
00:32:17.973 00:32:17.973 Latency(us) 00:32:17.973 [2024-11-05T17:11:06.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.973 [2024-11-05T17:11:06.850Z] Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:17.973 Verification LBA range: start 0x0 length 0x200 00:32:17.973 raid5f : 5.17 619.32 38.71 0.00 0.00 5397403.38 383.53 164912.41 00:32:17.973 [2024-11-05T17:11:06.850Z] Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:17.973 Verification LBA range: start 0x200 length 0x200 00:32:17.973 raid5f : 5.18 599.99 37.50 0.00 0.00 5568518.76 175.01 175398.17 00:32:17.973 [2024-11-05T17:11:06.850Z] =================================================================================================================== 00:32:17.973 [2024-11-05T17:11:06.850Z] Total : 1219.31 76.21 0.00 0.00 5481699.47 175.01 175398.17 00:32:18.908 00:32:18.908 real 0m7.313s 00:32:18.908 user 0m13.407s 00:32:18.908 sys 0m0.336s 00:32:18.908 17:11:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:18.908 17:11:07 -- common/autotest_common.sh@10 -- # set +x 00:32:18.908 ************************************ 00:32:18.908 END TEST bdev_verify_big_io 00:32:18.908 ************************************ 00:32:19.166 17:11:07 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:19.166 17:11:07 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:32:19.166 17:11:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:19.166 17:11:07 -- common/autotest_common.sh@10 -- # set +x 00:32:19.166 ************************************ 00:32:19.166 START TEST bdev_write_zeroes 00:32:19.166 ************************************ 00:32:19.166 17:11:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:19.166 [2024-11-05 17:11:07.918460] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:32:19.166 [2024-11-05 17:11:07.918667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141714 ] 00:32:19.425 [2024-11-05 17:11:08.080215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.425 [2024-11-05 17:11:08.267470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.992 Running I/O for 1 seconds... 
00:32:20.926 00:32:20.926 Latency(us) 00:32:20.926 [2024-11-05T17:11:09.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.926 [2024-11-05T17:11:09.803Z] Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:20.926 raid5f : 1.00 27824.22 108.69 0.00 0.00 4587.03 1355.40 6494.02 00:32:20.926 [2024-11-05T17:11:09.803Z] =================================================================================================================== 00:32:20.926 [2024-11-05T17:11:09.803Z] Total : 27824.22 108.69 0.00 0.00 4587.03 1355.40 6494.02 00:32:22.301 00:32:22.301 real 0m3.145s 00:32:22.301 user 0m2.691s 00:32:22.301 sys 0m0.336s 00:32:22.301 17:11:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:22.301 17:11:10 -- common/autotest_common.sh@10 -- # set +x 00:32:22.301 ************************************ 00:32:22.301 END TEST bdev_write_zeroes 00:32:22.301 ************************************ 00:32:22.301 17:11:11 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:22.301 17:11:11 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:32:22.301 17:11:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:22.301 17:11:11 -- common/autotest_common.sh@10 -- # set +x 00:32:22.301 ************************************ 00:32:22.301 START TEST bdev_json_nonenclosed 00:32:22.301 ************************************ 00:32:22.301 17:11:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:22.301 [2024-11-05 17:11:11.102889] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:32:22.301 [2024-11-05 17:11:11.103050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141777 ] 00:32:22.559 [2024-11-05 17:11:11.253461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.559 [2024-11-05 17:11:11.436417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.559 [2024-11-05 17:11:11.436639] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:32:22.559 [2024-11-05 17:11:11.436687] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:23.125 00:32:23.125 real 0m0.725s 00:32:23.125 user 0m0.477s 00:32:23.125 sys 0m0.148s 00:32:23.125 17:11:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:23.125 17:11:11 -- common/autotest_common.sh@10 -- # set +x 00:32:23.125 ************************************ 00:32:23.125 END TEST bdev_json_nonenclosed 00:32:23.125 ************************************ 00:32:23.125 17:11:11 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:23.125 17:11:11 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:32:23.125 17:11:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:23.125 17:11:11 -- common/autotest_common.sh@10 -- # set +x 00:32:23.125 ************************************ 00:32:23.125 START TEST bdev_json_nonarray 00:32:23.125 ************************************ 00:32:23.125 17:11:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:23.125 [2024-11-05 17:11:11.895504] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:32:23.125 [2024-11-05 17:11:11.895702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141808 ] 00:32:23.383 [2024-11-05 17:11:12.065455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.383 [2024-11-05 17:11:12.250590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.383 [2024-11-05 17:11:12.250806] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:32:23.383 [2024-11-05 17:11:12.250855] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:23.950 00:32:23.950 real 0m0.763s 00:32:23.950 user 0m0.529s 00:32:23.950 sys 0m0.133s 00:32:23.950 17:11:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:23.950 17:11:12 -- common/autotest_common.sh@10 -- # set +x 00:32:23.950 ************************************ 00:32:23.950 END TEST bdev_json_nonarray 00:32:23.950 ************************************ 00:32:23.950 17:11:12 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:32:23.950 17:11:12 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:32:23.950 17:11:12 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:32:23.950 17:11:12 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:32:23.950 17:11:12 -- bdev/blockdev.sh@809 -- # cleanup 00:32:23.950 17:11:12 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:32:23.950 17:11:12 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:23.950 17:11:12 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:32:23.950 17:11:12 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:32:23.950 17:11:12 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:32:23.950 17:11:12 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:32:23.950 00:32:23.950 real 0m46.932s 00:32:23.950 user 1m5.185s 00:32:23.950 sys 0m4.665s 00:32:23.950 17:11:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:23.950 17:11:12 -- common/autotest_common.sh@10 -- # set +x 00:32:23.950 ************************************ 00:32:23.950 END TEST blockdev_raid5f 00:32:23.950 ************************************ 00:32:23.950 17:11:12 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:32:23.950 17:11:12 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:32:23.950 17:11:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:23.950 17:11:12 -- common/autotest_common.sh@10 -- # set +x 00:32:23.950 17:11:12 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:32:23.950 17:11:12 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:32:23.950 17:11:12 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:32:23.950 17:11:12 -- common/autotest_common.sh@10 -- # set +x 00:32:25.327 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:25.327 Waiting for block devices as requested 00:32:25.586 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:32:25.846 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:26.105 Cleaning 00:32:26.105 Removing: /var/run/dpdk/spdk0/config 00:32:26.105 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:26.105 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:26.105 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:26.105 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:26.105 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:26.105 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:26.105 Removing: /dev/shm/spdk_tgt_trace.pid102913 00:32:26.105 Removing: /var/run/dpdk/spdk0 00:32:26.105 Removing: /var/run/dpdk/spdk_pid102681 00:32:26.105 Removing: /var/run/dpdk/spdk_pid102913 00:32:26.105 Removing: /var/run/dpdk/spdk_pid103223 00:32:26.105 Removing: /var/run/dpdk/spdk_pid103482 00:32:26.105 Removing: /var/run/dpdk/spdk_pid103667 00:32:26.105 Removing: /var/run/dpdk/spdk_pid103793 00:32:26.105 Removing: /var/run/dpdk/spdk_pid103905 
00:32:26.105 Removing: /var/run/dpdk/spdk_pid104041 00:32:26.105 Removing: /var/run/dpdk/spdk_pid104165 00:32:26.105 Removing: /var/run/dpdk/spdk_pid104206 00:32:26.105 Removing: /var/run/dpdk/spdk_pid104256 00:32:26.105 Removing: /var/run/dpdk/spdk_pid104340 00:32:26.105 Removing: /var/run/dpdk/spdk_pid104460 00:32:26.105 Removing: /var/run/dpdk/spdk_pid104989 00:32:26.105 Removing: /var/run/dpdk/spdk_pid105074 00:32:26.105 Removing: /var/run/dpdk/spdk_pid105156 00:32:26.105 Removing: /var/run/dpdk/spdk_pid105191 00:32:26.105 Removing: /var/run/dpdk/spdk_pid105326 00:32:26.105 Removing: /var/run/dpdk/spdk_pid105354 00:32:26.105 Removing: /var/run/dpdk/spdk_pid105490 00:32:26.105 Removing: /var/run/dpdk/spdk_pid105525 00:32:26.105 Removing: /var/run/dpdk/spdk_pid105589 00:32:26.105 Removing: /var/run/dpdk/spdk_pid105619 00:32:26.105 Removing: /var/run/dpdk/spdk_pid105683 00:32:26.105 Removing: /var/run/dpdk/spdk_pid105722 00:32:26.105 Removing: /var/run/dpdk/spdk_pid105919 00:32:26.105 Removing: /var/run/dpdk/spdk_pid105964 00:32:26.105 Removing: /var/run/dpdk/spdk_pid106014 00:32:26.105 Removing: /var/run/dpdk/spdk_pid106100 00:32:26.105 Removing: /var/run/dpdk/spdk_pid106196 00:32:26.105 Removing: /var/run/dpdk/spdk_pid106235 00:32:26.105 Removing: /var/run/dpdk/spdk_pid106334 00:32:26.105 Removing: /var/run/dpdk/spdk_pid106371 00:32:26.105 Removing: /var/run/dpdk/spdk_pid106416 00:32:26.105 Removing: /var/run/dpdk/spdk_pid106453 00:32:26.105 Removing: /var/run/dpdk/spdk_pid106503 00:32:26.105 Removing: /var/run/dpdk/spdk_pid106540 00:32:26.105 Removing: /var/run/dpdk/spdk_pid106595 00:32:26.105 Removing: /var/run/dpdk/spdk_pid106632 00:32:26.105 Removing: /var/run/dpdk/spdk_pid106677 00:32:26.105 Removing: /var/run/dpdk/spdk_pid106715 00:32:26.105 Removing: /var/run/dpdk/spdk_pid106767 00:32:26.105 Removing: /var/run/dpdk/spdk_pid106802 00:32:26.105 Removing: /var/run/dpdk/spdk_pid106853 00:32:26.105 Removing: /var/run/dpdk/spdk_pid106887 00:32:26.105 Removing: /var/run/dpdk/spdk_pid106941 00:32:26.105 Removing: /var/run/dpdk/spdk_pid106976 00:32:26.105 Removing: /var/run/dpdk/spdk_pid107021 00:32:26.105 Removing: /var/run/dpdk/spdk_pid107063 00:32:26.105 Removing: /var/run/dpdk/spdk_pid107110 00:32:26.105 Removing: /var/run/dpdk/spdk_pid107143 00:32:26.105 Removing: /var/run/dpdk/spdk_pid107201 00:32:26.105 Removing: /var/run/dpdk/spdk_pid107235 00:32:26.105 Removing: /var/run/dpdk/spdk_pid107280 00:32:26.105 Removing: /var/run/dpdk/spdk_pid107318 00:32:26.105 Removing: /var/run/dpdk/spdk_pid107370 00:32:26.105 Removing: /var/run/dpdk/spdk_pid107405 00:32:26.105 Removing: /var/run/dpdk/spdk_pid107456 00:32:26.105 Removing: /var/run/dpdk/spdk_pid107500 00:32:26.105 Removing: /var/run/dpdk/spdk_pid107545 00:32:26.364 Removing: /var/run/dpdk/spdk_pid107587 00:32:26.364 Removing: /var/run/dpdk/spdk_pid107632 00:32:26.364 Removing: /var/run/dpdk/spdk_pid107669 00:32:26.364 Removing: /var/run/dpdk/spdk_pid107727 00:32:26.364 Removing: /var/run/dpdk/spdk_pid107765 00:32:26.364 Removing: /var/run/dpdk/spdk_pid107813 00:32:26.364 Removing: /var/run/dpdk/spdk_pid107860 00:32:26.364 Removing: /var/run/dpdk/spdk_pid107908 00:32:26.364 Removing: /var/run/dpdk/spdk_pid107954 00:32:26.364 Removing: /var/run/dpdk/spdk_pid108000 00:32:26.364 Removing: /var/run/dpdk/spdk_pid108035 00:32:26.364 Removing: /var/run/dpdk/spdk_pid108090 00:32:26.364 Removing: /var/run/dpdk/spdk_pid108184 00:32:26.364 Removing: /var/run/dpdk/spdk_pid108317 00:32:26.364 Removing: /var/run/dpdk/spdk_pid108523 00:32:26.364 
Removing: /var/run/dpdk/spdk_pid108604 00:32:26.364 Removing: /var/run/dpdk/spdk_pid108661 00:32:26.364 Removing: /var/run/dpdk/spdk_pid109884 00:32:26.364 Removing: /var/run/dpdk/spdk_pid110114 00:32:26.364 Removing: /var/run/dpdk/spdk_pid110315 00:32:26.364 Removing: /var/run/dpdk/spdk_pid110440 00:32:26.364 Removing: /var/run/dpdk/spdk_pid110580 00:32:26.364 Removing: /var/run/dpdk/spdk_pid110654 00:32:26.365 Removing: /var/run/dpdk/spdk_pid110692 00:32:26.365 Removing: /var/run/dpdk/spdk_pid110731 00:32:26.365 Removing: /var/run/dpdk/spdk_pid111200 00:32:26.365 Removing: /var/run/dpdk/spdk_pid111296 00:32:26.365 Removing: /var/run/dpdk/spdk_pid111416 00:32:26.365 Removing: /var/run/dpdk/spdk_pid111476 00:32:26.365 Removing: /var/run/dpdk/spdk_pid112696 00:32:26.365 Removing: /var/run/dpdk/spdk_pid113597 00:32:26.365 Removing: /var/run/dpdk/spdk_pid114494 00:32:26.365 Removing: /var/run/dpdk/spdk_pid115640 00:32:26.365 Removing: /var/run/dpdk/spdk_pid116742 00:32:26.365 Removing: /var/run/dpdk/spdk_pid117834 00:32:26.365 Removing: /var/run/dpdk/spdk_pid119334 00:32:26.365 Removing: /var/run/dpdk/spdk_pid120535 00:32:26.365 Removing: /var/run/dpdk/spdk_pid121744 00:32:26.365 Removing: /var/run/dpdk/spdk_pid122418 00:32:26.365 Removing: /var/run/dpdk/spdk_pid122960 00:32:26.365 Removing: /var/run/dpdk/spdk_pid123606 00:32:26.365 Removing: /var/run/dpdk/spdk_pid124099 00:32:26.365 Removing: /var/run/dpdk/spdk_pid124668 00:32:26.365 Removing: /var/run/dpdk/spdk_pid125209 00:32:26.365 Removing: /var/run/dpdk/spdk_pid125859 00:32:26.365 Removing: /var/run/dpdk/spdk_pid126371 00:32:26.365 Removing: /var/run/dpdk/spdk_pid127730 00:32:26.365 Removing: /var/run/dpdk/spdk_pid128334 00:32:26.365 Removing: /var/run/dpdk/spdk_pid128875 00:32:26.365 Removing: /var/run/dpdk/spdk_pid130393 00:32:26.365 Removing: /var/run/dpdk/spdk_pid131056 00:32:26.365 Removing: /var/run/dpdk/spdk_pid131666 00:32:26.365 Removing: /var/run/dpdk/spdk_pid132442 00:32:26.365 Removing: /var/run/dpdk/spdk_pid132496 00:32:26.365 Removing: /var/run/dpdk/spdk_pid132550 00:32:26.365 Removing: /var/run/dpdk/spdk_pid132609 00:32:26.365 Removing: /var/run/dpdk/spdk_pid132759 00:32:26.365 Removing: /var/run/dpdk/spdk_pid132912 00:32:26.365 Removing: /var/run/dpdk/spdk_pid133153 00:32:26.365 Removing: /var/run/dpdk/spdk_pid133460 00:32:26.365 Removing: /var/run/dpdk/spdk_pid133475 00:32:26.365 Removing: /var/run/dpdk/spdk_pid133537 00:32:26.365 Removing: /var/run/dpdk/spdk_pid133560 00:32:26.365 Removing: /var/run/dpdk/spdk_pid133593 00:32:26.365 Removing: /var/run/dpdk/spdk_pid133620 00:32:26.365 Removing: /var/run/dpdk/spdk_pid133645 00:32:26.365 Removing: /var/run/dpdk/spdk_pid133673 00:32:26.365 Removing: /var/run/dpdk/spdk_pid133705 00:32:26.365 Removing: /var/run/dpdk/spdk_pid133734 00:32:26.365 Removing: /var/run/dpdk/spdk_pid133755 00:32:26.365 Removing: /var/run/dpdk/spdk_pid133787 00:32:26.365 Removing: /var/run/dpdk/spdk_pid133814 00:32:26.365 Removing: /var/run/dpdk/spdk_pid133840 00:32:26.365 Removing: /var/run/dpdk/spdk_pid133881 00:32:26.365 Removing: /var/run/dpdk/spdk_pid133902 00:32:26.365 Removing: /var/run/dpdk/spdk_pid133930 00:32:26.365 Removing: /var/run/dpdk/spdk_pid133962 00:32:26.365 Removing: /var/run/dpdk/spdk_pid133982 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134015 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134065 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134089 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134136 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134225 00:32:26.631 Removing: 
/var/run/dpdk/spdk_pid134268 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134296 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134336 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134369 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134384 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134449 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134475 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134516 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134546 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134563 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134592 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134609 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134633 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134658 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134679 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134726 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134773 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134800 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134838 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134870 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134886 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134952 00:32:26.631 Removing: /var/run/dpdk/spdk_pid134978 00:32:26.631 Removing: /var/run/dpdk/spdk_pid135020 00:32:26.631 Removing: /var/run/dpdk/spdk_pid135047 00:32:26.631 Removing: /var/run/dpdk/spdk_pid135064 00:32:26.631 Removing: /var/run/dpdk/spdk_pid135088 00:32:26.631 Removing: /var/run/dpdk/spdk_pid135119 00:32:26.631 Removing: /var/run/dpdk/spdk_pid135136 00:32:26.631 Removing: /var/run/dpdk/spdk_pid135160 00:32:26.631 Removing: /var/run/dpdk/spdk_pid135181 00:32:26.631 Removing: /var/run/dpdk/spdk_pid135279 00:32:26.631 Removing: /var/run/dpdk/spdk_pid135375 00:32:26.631 Removing: /var/run/dpdk/spdk_pid135522 00:32:26.631 Removing: /var/run/dpdk/spdk_pid135554 00:32:26.631 Removing: /var/run/dpdk/spdk_pid135608 00:32:26.631 Removing: /var/run/dpdk/spdk_pid135671 00:32:26.631 Removing: /var/run/dpdk/spdk_pid135716 00:32:26.631 Removing: /var/run/dpdk/spdk_pid135740 00:32:26.631 Removing: /var/run/dpdk/spdk_pid135774 00:32:26.631 Removing: /var/run/dpdk/spdk_pid135818 00:32:26.631 Removing: /var/run/dpdk/spdk_pid135851 00:32:26.631 Removing: /var/run/dpdk/spdk_pid135943 00:32:26.631 Removing: /var/run/dpdk/spdk_pid136011 00:32:26.631 Removing: /var/run/dpdk/spdk_pid136056 00:32:26.631 Removing: /var/run/dpdk/spdk_pid136328 00:32:26.631 Removing: /var/run/dpdk/spdk_pid136456 00:32:26.631 Removing: /var/run/dpdk/spdk_pid136503 00:32:26.631 Removing: /var/run/dpdk/spdk_pid136607 00:32:26.631 Removing: /var/run/dpdk/spdk_pid136699 00:32:26.631 Removing: /var/run/dpdk/spdk_pid136744 00:32:26.631 Removing: /var/run/dpdk/spdk_pid136995 00:32:26.631 Removing: /var/run/dpdk/spdk_pid137142 00:32:26.631 Removing: /var/run/dpdk/spdk_pid137249 00:32:26.631 Removing: /var/run/dpdk/spdk_pid137307 00:32:26.631 Removing: /var/run/dpdk/spdk_pid137345 00:32:26.631 Removing: /var/run/dpdk/spdk_pid137420 00:32:26.631 Removing: /var/run/dpdk/spdk_pid137863 00:32:26.631 Removing: /var/run/dpdk/spdk_pid137908 00:32:26.631 Removing: /var/run/dpdk/spdk_pid138223 00:32:26.631 Removing: /var/run/dpdk/spdk_pid138325 00:32:26.631 Removing: /var/run/dpdk/spdk_pid138433 00:32:26.631 Removing: /var/run/dpdk/spdk_pid138490 00:32:26.631 Removing: /var/run/dpdk/spdk_pid138529 00:32:26.631 Removing: /var/run/dpdk/spdk_pid138552 00:32:26.631 Removing: /var/run/dpdk/spdk_pid139923 00:32:26.631 Removing: /var/run/dpdk/spdk_pid140072 00:32:26.631 Removing: /var/run/dpdk/spdk_pid140076 00:32:26.631 Removing: 
/var/run/dpdk/spdk_pid140108 00:32:26.631 Removing: /var/run/dpdk/spdk_pid140606 00:32:26.631 Removing: /var/run/dpdk/spdk_pid140729 00:32:26.631 Removing: /var/run/dpdk/spdk_pid140893 00:32:26.631 Removing: /var/run/dpdk/spdk_pid140969 00:32:26.631 Removing: /var/run/dpdk/spdk_pid141019 00:32:26.631 Removing: /var/run/dpdk/spdk_pid141309 00:32:26.889 Removing: /var/run/dpdk/spdk_pid141504 00:32:26.889 Removing: /var/run/dpdk/spdk_pid141607 00:32:26.889 Removing: /var/run/dpdk/spdk_pid141714 00:32:26.889 Removing: /var/run/dpdk/spdk_pid141777 00:32:26.889 Removing: /var/run/dpdk/spdk_pid141808 00:32:26.889 Clean 00:32:26.889 killing process with pid 92495 00:32:26.889 killing process with pid 92496 00:32:26.889 17:11:15 -- common/autotest_common.sh@1446 -- # return 0 00:32:26.889 17:11:15 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:32:26.889 17:11:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:26.889 17:11:15 -- common/autotest_common.sh@10 -- # set +x 00:32:26.889 17:11:15 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:32:26.889 17:11:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:26.889 17:11:15 -- common/autotest_common.sh@10 -- # set +x 00:32:27.148 17:11:15 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:27.148 17:11:15 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:27.148 17:11:15 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:27.148 17:11:15 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:32:27.148 17:11:15 -- spdk/autotest.sh@383 -- # hostname 00:32:27.148 17:11:15 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:27.406 geninfo: WARNING: invalid characters removed from testname! 
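The lcov call above captures the test run's coverage data from the repo, tags it with the hostname, and writes cov_test.info; the -a merge and -r filter passes that follow below combine it with the baseline and strip vendored and system paths. A condensed sketch of that pipeline, with the long output paths shortened to placeholder variables:

# Coverage pipeline as traced in this log; $REPO and $OUT are placeholders.
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
# capture test-run coverage, tagged with the hostname
lcov $LCOV_OPTS -q -c --no-external -d "$REPO" -t "$(hostname)" -o "$OUT/cov_test.info"
# merge baseline + test into one tracefile
lcov $LCOV_OPTS -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
# drop vendored (dpdk) and system (/usr) code from the totals
lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" '*/dpdk/*' '/usr/*' -o "$OUT/cov_total.info"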
00:33:06.147 17:11:52 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:08.678 17:11:57 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:11.208 17:11:59 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:14.490 17:12:02 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:17.018 17:12:05 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:19.546 17:12:08 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:22.084 17:12:10 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:22.084 17:12:10 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:33:22.084 17:12:10 -- common/autotest_common.sh@1690 -- $ lcov --version 00:33:22.084 17:12:10 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:33:22.084 17:12:10 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:33:22.084 17:12:10 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:33:22.084 17:12:10 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:33:22.084 17:12:10 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:33:22.084 17:12:10 -- scripts/common.sh@335 -- $ IFS=.-: 00:33:22.084 17:12:10 -- scripts/common.sh@335 -- $ read -ra ver1 00:33:22.084 17:12:10 -- scripts/common.sh@336 -- $ IFS=.-: 00:33:22.084 17:12:10 -- scripts/common.sh@336 -- $ read -ra ver2 00:33:22.084 17:12:10 -- scripts/common.sh@337 -- $ local 'op=<' 00:33:22.084 17:12:10 -- scripts/common.sh@339 -- $ ver1_l=2 00:33:22.084 17:12:10 -- scripts/common.sh@340 -- $ ver2_l=1 00:33:22.084 17:12:10 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 
v 00:33:22.084 17:12:10 -- scripts/common.sh@343 -- $ case "$op" in 00:33:22.084 17:12:10 -- scripts/common.sh@344 -- $ : 1 00:33:22.084 17:12:10 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:33:22.084 17:12:10 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:22.084 17:12:10 -- scripts/common.sh@364 -- $ decimal 1 00:33:22.084 17:12:10 -- scripts/common.sh@352 -- $ local d=1 00:33:22.084 17:12:10 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:33:22.084 17:12:10 -- scripts/common.sh@354 -- $ echo 1 00:33:22.084 17:12:10 -- scripts/common.sh@364 -- $ ver1[v]=1 00:33:22.084 17:12:10 -- scripts/common.sh@365 -- $ decimal 2 00:33:22.084 17:12:10 -- scripts/common.sh@352 -- $ local d=2 00:33:22.084 17:12:10 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:33:22.084 17:12:10 -- scripts/common.sh@354 -- $ echo 2 00:33:22.084 17:12:10 -- scripts/common.sh@365 -- $ ver2[v]=2 00:33:22.084 17:12:10 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:33:22.084 17:12:10 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:33:22.084 17:12:10 -- scripts/common.sh@367 -- $ return 0 00:33:22.084 17:12:10 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:22.084 17:12:10 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:33:22.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.084 --rc genhtml_branch_coverage=1 00:33:22.084 --rc genhtml_function_coverage=1 00:33:22.084 --rc genhtml_legend=1 00:33:22.084 --rc geninfo_all_blocks=1 00:33:22.084 --rc geninfo_unexecuted_blocks=1 00:33:22.084 00:33:22.084 ' 00:33:22.084 17:12:10 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:33:22.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.084 --rc genhtml_branch_coverage=1 00:33:22.084 --rc genhtml_function_coverage=1 00:33:22.084 --rc genhtml_legend=1 00:33:22.084 --rc geninfo_all_blocks=1 00:33:22.084 --rc geninfo_unexecuted_blocks=1 00:33:22.084 00:33:22.084 ' 00:33:22.084 17:12:10 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:33:22.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.084 --rc genhtml_branch_coverage=1 00:33:22.084 --rc genhtml_function_coverage=1 00:33:22.084 --rc genhtml_legend=1 00:33:22.084 --rc geninfo_all_blocks=1 00:33:22.084 --rc geninfo_unexecuted_blocks=1 00:33:22.084 00:33:22.084 ' 00:33:22.084 17:12:10 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:33:22.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.084 --rc genhtml_branch_coverage=1 00:33:22.084 --rc genhtml_function_coverage=1 00:33:22.084 --rc genhtml_legend=1 00:33:22.084 --rc geninfo_all_blocks=1 00:33:22.084 --rc geninfo_unexecuted_blocks=1 00:33:22.084 00:33:22.084 ' 00:33:22.084 17:12:10 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:22.084 17:12:10 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:22.084 17:12:10 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:22.084 17:12:10 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:22.084 17:12:10 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:22.084 17:12:10 -- 
paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:22.084 17:12:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:22.084 17:12:10 -- paths/export.sh@5 -- $ export PATH 00:33:22.084 17:12:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:22.084 17:12:10 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:33:22.084 17:12:10 -- common/autobuild_common.sh@440 -- $ date +%s 00:33:22.084 17:12:10 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1730826730.XXXXXX 00:33:22.084 17:12:10 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1730826730.6qXTWL 00:33:22.084 17:12:10 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:33:22.084 17:12:10 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:33:22.084 17:12:10 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:33:22.084 17:12:10 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:33:22.084 17:12:10 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:33:22.084 17:12:10 -- common/autobuild_common.sh@456 -- $ get_config_params 00:33:22.084 17:12:10 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:33:22.084 17:12:10 -- common/autotest_common.sh@10 -- $ set +x 00:33:22.343 17:12:10 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:33:22.343 17:12:10 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:33:22.343 17:12:10 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:33:22.343 17:12:10 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:22.343 17:12:10 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:33:22.343 17:12:10 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:33:22.343 17:12:10 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:33:22.343 17:12:10 -- common/autotest_common.sh@722 -- $ xtrace_disable 00:33:22.343 17:12:10 -- common/autotest_common.sh@10 -- $ set +x 00:33:22.343 17:12:11 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:33:22.343 17:12:11 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:33:22.343 17:12:11 -- spdk/autopackage.sh@40 -- $ get_config_params 00:33:22.343 17:12:11 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:33:22.343 17:12:11 -- 
common/autotest_common.sh@397 -- $ xtrace_disable 00:33:22.343 17:12:11 -- common/autotest_common.sh@10 -- $ set +x 00:33:22.343 17:12:11 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:33:22.343 17:12:11 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --enable-lto 00:33:22.343 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:33:22.343 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:33:22.600 Using 'verbs' RDMA provider 00:33:35.402 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:33:45.372 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:33:45.631 Creating mk/config.mk...done. 00:33:45.631 Creating mk/cc.flags.mk...done. 00:33:45.631 Type 'make' to build. 00:33:45.631 17:12:34 -- spdk/autopackage.sh@43 -- $ make -j10 00:33:45.889 make[1]: Nothing to be done for 'all'. 00:33:51.155 The Meson build system 00:33:51.155 Version: 1.4.0 00:33:51.155 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:33:51.155 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:33:51.155 Build type: native build 00:33:51.155 Program cat found: YES (/usr/bin/cat) 00:33:51.155 Project name: DPDK 00:33:51.155 Project version: 23.11.0 00:33:51.155 C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:33:51.155 C linker for the host machine: cc ld.bfd 2.38 00:33:51.155 Host machine cpu family: x86_64 00:33:51.155 Host machine cpu: x86_64 00:33:51.155 Message: ## Building in Developer Mode ## 00:33:51.155 Program pkg-config found: YES (/usr/bin/pkg-config) 00:33:51.155 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:33:51.155 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:33:51.155 Program python3 found: YES (/usr/bin/python3) 00:33:51.155 Program cat found: YES (/usr/bin/cat) 00:33:51.155 Compiler for C supports arguments -march=native: YES 00:33:51.155 Checking for size of "void *" : 8 00:33:51.155 Checking for size of "void *" : 8 (cached) 00:33:51.155 Library m found: YES 00:33:51.155 Library numa found: YES 00:33:51.155 Has header "numaif.h" : YES 00:33:51.155 Library fdt found: NO 00:33:51.155 Library execinfo found: NO 00:33:51.155 Has header "execinfo.h" : YES 00:33:51.155 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:33:51.155 Run-time dependency libarchive found: NO (tried pkgconfig) 00:33:51.155 Run-time dependency libbsd found: NO (tried pkgconfig) 00:33:51.155 Run-time dependency jansson found: NO (tried pkgconfig) 00:33:51.155 Run-time dependency openssl found: YES 3.0.2 00:33:51.155 Run-time dependency libpcap found: NO (tried pkgconfig) 00:33:51.155 Library pcap found: NO 00:33:51.155 Compiler for C supports arguments -Wcast-qual: YES 00:33:51.155 Compiler for C supports arguments -Wdeprecated: YES 00:33:51.155 Compiler for C supports arguments -Wformat: YES 00:33:51.155 Compiler for C supports arguments -Wformat-nonliteral: YES 00:33:51.155 Compiler for C supports arguments -Wformat-security: YES 00:33:51.155 Compiler for C supports arguments -Wmissing-declarations: YES 
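For the release package, the autopackage trace above strips --enable-debug out of the configure parameters with a sed pass and reconfigures with --enable-lto. The same rewrite in two lines, assuming get_config_params is sourced from the repo's scripts as it is in the trace:

# Rebuild the flag list without debug, then reconfigure for an LTO release build.
config_params="$(get_config_params | sed 's/--enable-debug//g')"
./configure $config_params --enable-lto   # unquoted on purpose: the flags must word-split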
00:33:51.155 Compiler for C supports arguments -Wmissing-prototypes: YES 00:33:51.155 Compiler for C supports arguments -Wnested-externs: YES 00:33:51.155 Compiler for C supports arguments -Wold-style-definition: YES 00:33:51.155 Compiler for C supports arguments -Wpointer-arith: YES 00:33:51.155 Compiler for C supports arguments -Wsign-compare: YES 00:33:51.155 Compiler for C supports arguments -Wstrict-prototypes: YES 00:33:51.155 Compiler for C supports arguments -Wundef: YES 00:33:51.155 Compiler for C supports arguments -Wwrite-strings: YES 00:33:51.155 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:33:51.155 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:33:51.155 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:33:51.155 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:33:51.155 Program objdump found: YES (/usr/bin/objdump) 00:33:51.155 Compiler for C supports arguments -mavx512f: YES 00:33:51.155 Checking if "AVX512 checking" compiles: YES 00:33:51.155 Fetching value of define "__SSE4_2__" : 1 00:33:51.155 Fetching value of define "__AES__" : 1 00:33:51.155 Fetching value of define "__AVX__" : 1 00:33:51.155 Fetching value of define "__AVX2__" : 1 00:33:51.155 Fetching value of define "__AVX512BW__" : (undefined) 00:33:51.155 Fetching value of define "__AVX512CD__" : (undefined) 00:33:51.155 Fetching value of define "__AVX512DQ__" : (undefined) 00:33:51.155 Fetching value of define "__AVX512F__" : (undefined) 00:33:51.155 Fetching value of define "__AVX512VL__" : (undefined) 00:33:51.155 Fetching value of define "__PCLMUL__" : 1 00:33:51.155 Fetching value of define "__RDRND__" : 1 00:33:51.155 Fetching value of define "__RDSEED__" : 1 00:33:51.155 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:33:51.155 Fetching value of define "__znver1__" : (undefined) 00:33:51.155 Fetching value of define "__znver2__" : (undefined) 00:33:51.155 Fetching value of define "__znver3__" : (undefined) 00:33:51.155 Fetching value of define "__znver4__" : (undefined) 00:33:51.155 Compiler for C supports arguments -ffat-lto-objects: YES 00:33:51.155 Library asan found: YES 00:33:51.155 Compiler for C supports arguments -Wno-format-truncation: YES 00:33:51.155 Message: lib/log: Defining dependency "log" 00:33:51.155 Message: lib/kvargs: Defining dependency "kvargs" 00:33:51.155 Message: lib/telemetry: Defining dependency "telemetry" 00:33:51.155 Library rt found: YES 00:33:51.155 Checking for function "getentropy" : NO 00:33:51.155 Message: lib/eal: Defining dependency "eal" 00:33:51.155 Message: lib/ring: Defining dependency "ring" 00:33:51.155 Message: lib/rcu: Defining dependency "rcu" 00:33:51.155 Message: lib/mempool: Defining dependency "mempool" 00:33:51.155 Message: lib/mbuf: Defining dependency "mbuf" 00:33:51.155 Fetching value of define "__PCLMUL__" : 1 (cached) 00:33:51.155 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:33:51.155 Compiler for C supports arguments -mpclmul: YES 00:33:51.155 Compiler for C supports arguments -maes: YES 00:33:51.155 Compiler for C supports arguments -mavx512f: YES (cached) 00:33:51.155 Compiler for C supports arguments -mavx512bw: YES 00:33:51.155 Compiler for C supports arguments -mavx512dq: YES 00:33:51.155 Compiler for C supports arguments -mavx512vl: YES 00:33:51.155 Compiler for C supports arguments -mvpclmulqdq: YES 00:33:51.155 Compiler for C supports arguments -mavx2: YES 00:33:51.155 Compiler for C supports arguments -mavx: YES 
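Each "Compiler for C supports arguments -X: YES" line above is meson test-compiling a trivial translation unit with the candidate flag, and the "Fetching value of define" lines read the compiler's predefined macros. A rough shell equivalent of both probes (illustrative only, not meson's actual implementation):

# flag probe: does cc accept -mavx512f?
if echo 'int main(void){return 0;}' | cc -Werror -mavx512f -x c - -o /dev/null 2>/dev/null; then
    echo "Compiler for C supports arguments -mavx512f: YES"
else
    echo "Compiler for C supports arguments -mavx512f: NO"
fi
# define probe: is __AVX2__ predefined under -march=native?
cc -march=native -dM -E - </dev/null | grep -w __AVX2__ || echo '__AVX2__ : (undefined)'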
00:33:51.155 Message: lib/net: Defining dependency "net" 00:33:51.155 Message: lib/meter: Defining dependency "meter" 00:33:51.155 Message: lib/ethdev: Defining dependency "ethdev" 00:33:51.155 Message: lib/pci: Defining dependency "pci" 00:33:51.155 Message: lib/cmdline: Defining dependency "cmdline" 00:33:51.155 Message: lib/hash: Defining dependency "hash" 00:33:51.155 Message: lib/timer: Defining dependency "timer" 00:33:51.155 Message: lib/compressdev: Defining dependency "compressdev" 00:33:51.155 Message: lib/cryptodev: Defining dependency "cryptodev" 00:33:51.155 Message: lib/dmadev: Defining dependency "dmadev" 00:33:51.155 Compiler for C supports arguments -Wno-cast-qual: YES 00:33:51.155 Message: lib/power: Defining dependency "power" 00:33:51.155 Message: lib/reorder: Defining dependency "reorder" 00:33:51.155 Message: lib/security: Defining dependency "security" 00:33:51.155 Has header "linux/userfaultfd.h" : YES 00:33:51.155 Has header "linux/vduse.h" : YES 00:33:51.155 Message: lib/vhost: Defining dependency "vhost" 00:33:51.155 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:33:51.155 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:33:51.155 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:33:51.155 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:33:51.155 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:33:51.155 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:33:51.155 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:33:51.155 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:33:51.155 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:33:51.155 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:33:51.155 Program doxygen found: YES (/usr/bin/doxygen) 00:33:51.155 Configuring doxy-api-html.conf using configuration 00:33:51.155 Configuring doxy-api-man.conf using configuration 00:33:51.155 Program mandb found: YES (/usr/bin/mandb) 00:33:51.155 Program sphinx-build found: NO 00:33:51.155 Configuring rte_build_config.h using configuration 00:33:51.155 Message: 00:33:51.155 ================= 00:33:51.155 Applications Enabled 00:33:51.155 ================= 00:33:51.155 00:33:51.155 apps: 00:33:51.155 00:33:51.155 00:33:51.155 Message: 00:33:51.155 ================= 00:33:51.155 Libraries Enabled 00:33:51.155 ================= 00:33:51.155 00:33:51.155 libs: 00:33:51.155 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:33:51.155 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:33:51.155 cryptodev, dmadev, power, reorder, security, vhost, 00:33:51.155 00:33:51.155 Message: 00:33:51.155 =============== 00:33:51.155 Drivers Enabled 00:33:51.155 =============== 00:33:51.155 00:33:51.155 common: 00:33:51.155 00:33:51.155 bus: 00:33:51.155 pci, vdev, 00:33:51.155 mempool: 00:33:51.155 ring, 00:33:51.155 dma: 00:33:51.155 00:33:51.155 net: 00:33:51.155 00:33:51.155 crypto: 00:33:51.155 00:33:51.155 compress: 00:33:51.155 00:33:51.155 vdpa: 00:33:51.155 00:33:51.155 00:33:51.155 Message: 00:33:51.155 ================= 00:33:51.155 Content Skipped 00:33:51.155 ================= 00:33:51.155 00:33:51.155 apps: 00:33:51.155 dumpcap: explicitly disabled via build config 00:33:51.155 graph: explicitly disabled via build config 00:33:51.156 pdump: explicitly disabled via build config 00:33:51.156 proc-info: explicitly disabled via build config 
00:33:51.156 test-acl: explicitly disabled via build config 00:33:51.156 test-bbdev: explicitly disabled via build config 00:33:51.156 test-cmdline: explicitly disabled via build config 00:33:51.156 test-compress-perf: explicitly disabled via build config 00:33:51.156 test-crypto-perf: explicitly disabled via build config 00:33:51.156 test-dma-perf: explicitly disabled via build config 00:33:51.156 test-eventdev: explicitly disabled via build config 00:33:51.156 test-fib: explicitly disabled via build config 00:33:51.156 test-flow-perf: explicitly disabled via build config 00:33:51.156 test-gpudev: explicitly disabled via build config 00:33:51.156 test-mldev: explicitly disabled via build config 00:33:51.156 test-pipeline: explicitly disabled via build config 00:33:51.156 test-pmd: explicitly disabled via build config 00:33:51.156 test-regex: explicitly disabled via build config 00:33:51.156 test-sad: explicitly disabled via build config 00:33:51.156 test-security-perf: explicitly disabled via build config 00:33:51.156 00:33:51.156 libs: 00:33:51.156 metrics: explicitly disabled via build config 00:33:51.156 acl: explicitly disabled via build config 00:33:51.156 bbdev: explicitly disabled via build config 00:33:51.156 bitratestats: explicitly disabled via build config 00:33:51.156 bpf: explicitly disabled via build config 00:33:51.156 cfgfile: explicitly disabled via build config 00:33:51.156 distributor: explicitly disabled via build config 00:33:51.156 efd: explicitly disabled via build config 00:33:51.156 eventdev: explicitly disabled via build config 00:33:51.156 dispatcher: explicitly disabled via build config 00:33:51.156 gpudev: explicitly disabled via build config 00:33:51.156 gro: explicitly disabled via build config 00:33:51.156 gso: explicitly disabled via build config 00:33:51.156 ip_frag: explicitly disabled via build config 00:33:51.156 jobstats: explicitly disabled via build config 00:33:51.156 latencystats: explicitly disabled via build config 00:33:51.156 lpm: explicitly disabled via build config 00:33:51.156 member: explicitly disabled via build config 00:33:51.156 pcapng: explicitly disabled via build config 00:33:51.156 rawdev: explicitly disabled via build config 00:33:51.156 regexdev: explicitly disabled via build config 00:33:51.156 mldev: explicitly disabled via build config 00:33:51.156 rib: explicitly disabled via build config 00:33:51.156 sched: explicitly disabled via build config 00:33:51.156 stack: explicitly disabled via build config 00:33:51.156 ipsec: explicitly disabled via build config 00:33:51.156 pdcp: explicitly disabled via build config 00:33:51.156 fib: explicitly disabled via build config 00:33:51.156 port: explicitly disabled via build config 00:33:51.156 pdump: explicitly disabled via build config 00:33:51.156 table: explicitly disabled via build config 00:33:51.156 pipeline: explicitly disabled via build config 00:33:51.156 graph: explicitly disabled via build config 00:33:51.156 node: explicitly disabled via build config 00:33:51.156 00:33:51.156 drivers: 00:33:51.156 common/cpt: not in enabled drivers build config 00:33:51.156 common/dpaax: not in enabled drivers build config 00:33:51.156 common/iavf: not in enabled drivers build config 00:33:51.156 common/idpf: not in enabled drivers build config 00:33:51.156 common/mvep: not in enabled drivers build config 00:33:51.156 common/octeontx: not in enabled drivers build config 00:33:51.156 bus/auxiliary: not in enabled drivers build config 00:33:51.156 bus/cdx: not in enabled drivers build config 
00:33:51.156 bus/dpaa: not in enabled drivers build config 00:33:51.156 bus/fslmc: not in enabled drivers build config 00:33:51.156 bus/ifpga: not in enabled drivers build config 00:33:51.156 bus/platform: not in enabled drivers build config 00:33:51.156 bus/vmbus: not in enabled drivers build config 00:33:51.156 common/cnxk: not in enabled drivers build config 00:33:51.156 common/mlx5: not in enabled drivers build config 00:33:51.156 common/nfp: not in enabled drivers build config 00:33:51.156 common/qat: not in enabled drivers build config 00:33:51.156 common/sfc_efx: not in enabled drivers build config 00:33:51.156 mempool/bucket: not in enabled drivers build config 00:33:51.156 mempool/cnxk: not in enabled drivers build config 00:33:51.156 mempool/dpaa: not in enabled drivers build config 00:33:51.156 mempool/dpaa2: not in enabled drivers build config 00:33:51.156 mempool/octeontx: not in enabled drivers build config 00:33:51.156 mempool/stack: not in enabled drivers build config 00:33:51.156 dma/cnxk: not in enabled drivers build config 00:33:51.156 dma/dpaa: not in enabled drivers build config 00:33:51.156 dma/dpaa2: not in enabled drivers build config 00:33:51.156 dma/hisilicon: not in enabled drivers build config 00:33:51.156 dma/idxd: not in enabled drivers build config 00:33:51.156 dma/ioat: not in enabled drivers build config 00:33:51.156 dma/skeleton: not in enabled drivers build config 00:33:51.156 net/af_packet: not in enabled drivers build config 00:33:51.156 net/af_xdp: not in enabled drivers build config 00:33:51.156 net/ark: not in enabled drivers build config 00:33:51.156 net/atlantic: not in enabled drivers build config 00:33:51.156 net/avp: not in enabled drivers build config 00:33:51.156 net/axgbe: not in enabled drivers build config 00:33:51.156 net/bnx2x: not in enabled drivers build config 00:33:51.156 net/bnxt: not in enabled drivers build config 00:33:51.156 net/bonding: not in enabled drivers build config 00:33:51.156 net/cnxk: not in enabled drivers build config 00:33:51.156 net/cpfl: not in enabled drivers build config 00:33:51.156 net/cxgbe: not in enabled drivers build config 00:33:51.156 net/dpaa: not in enabled drivers build config 00:33:51.156 net/dpaa2: not in enabled drivers build config 00:33:51.156 net/e1000: not in enabled drivers build config 00:33:51.156 net/ena: not in enabled drivers build config 00:33:51.156 net/enetc: not in enabled drivers build config 00:33:51.156 net/enetfec: not in enabled drivers build config 00:33:51.156 net/enic: not in enabled drivers build config 00:33:51.156 net/failsafe: not in enabled drivers build config 00:33:51.156 net/fm10k: not in enabled drivers build config 00:33:51.156 net/gve: not in enabled drivers build config 00:33:51.156 net/hinic: not in enabled drivers build config 00:33:51.156 net/hns3: not in enabled drivers build config 00:33:51.156 net/i40e: not in enabled drivers build config 00:33:51.156 net/iavf: not in enabled drivers build config 00:33:51.156 net/ice: not in enabled drivers build config 00:33:51.156 net/idpf: not in enabled drivers build config 00:33:51.156 net/igc: not in enabled drivers build config 00:33:51.156 net/ionic: not in enabled drivers build config 00:33:51.156 net/ipn3ke: not in enabled drivers build config 00:33:51.156 net/ixgbe: not in enabled drivers build config 00:33:51.156 net/mana: not in enabled drivers build config 00:33:51.156 net/memif: not in enabled drivers build config 00:33:51.156 net/mlx4: not in enabled drivers build config 00:33:51.156 net/mlx5: not in enabled 
drivers build config 00:33:51.156 net/mvneta: not in enabled drivers build config 00:33:51.156 net/mvpp2: not in enabled drivers build config 00:33:51.156 net/netvsc: not in enabled drivers build config 00:33:51.156 net/nfb: not in enabled drivers build config 00:33:51.156 net/nfp: not in enabled drivers build config 00:33:51.156 net/ngbe: not in enabled drivers build config 00:33:51.156 net/null: not in enabled drivers build config 00:33:51.156 net/octeontx: not in enabled drivers build config 00:33:51.156 net/octeon_ep: not in enabled drivers build config 00:33:51.156 net/pcap: not in enabled drivers build config 00:33:51.156 net/pfe: not in enabled drivers build config 00:33:51.156 net/qede: not in enabled drivers build config 00:33:51.156 net/ring: not in enabled drivers build config 00:33:51.156 net/sfc: not in enabled drivers build config 00:33:51.156 net/softnic: not in enabled drivers build config 00:33:51.156 net/tap: not in enabled drivers build config 00:33:51.156 net/thunderx: not in enabled drivers build config 00:33:51.156 net/txgbe: not in enabled drivers build config 00:33:51.156 net/vdev_netvsc: not in enabled drivers build config 00:33:51.156 net/vhost: not in enabled drivers build config 00:33:51.156 net/virtio: not in enabled drivers build config 00:33:51.156 net/vmxnet3: not in enabled drivers build config 00:33:51.156 raw/*: missing internal dependency, "rawdev" 00:33:51.156 crypto/armv8: not in enabled drivers build config 00:33:51.156 crypto/bcmfs: not in enabled drivers build config 00:33:51.156 crypto/caam_jr: not in enabled drivers build config 00:33:51.156 crypto/ccp: not in enabled drivers build config 00:33:51.156 crypto/cnxk: not in enabled drivers build config 00:33:51.156 crypto/dpaa_sec: not in enabled drivers build config 00:33:51.156 crypto/dpaa2_sec: not in enabled drivers build config 00:33:51.156 crypto/ipsec_mb: not in enabled drivers build config 00:33:51.156 crypto/mlx5: not in enabled drivers build config 00:33:51.156 crypto/mvsam: not in enabled drivers build config 00:33:51.156 crypto/nitrox: not in enabled drivers build config 00:33:51.156 crypto/null: not in enabled drivers build config 00:33:51.156 crypto/octeontx: not in enabled drivers build config 00:33:51.156 crypto/openssl: not in enabled drivers build config 00:33:51.156 crypto/scheduler: not in enabled drivers build config 00:33:51.156 crypto/uadk: not in enabled drivers build config 00:33:51.156 crypto/virtio: not in enabled drivers build config 00:33:51.156 compress/isal: not in enabled drivers build config 00:33:51.156 compress/mlx5: not in enabled drivers build config 00:33:51.156 compress/octeontx: not in enabled drivers build config 00:33:51.156 compress/zlib: not in enabled drivers build config 00:33:51.156 regex/*: missing internal dependency, "regexdev" 00:33:51.156 ml/*: missing internal dependency, "mldev" 00:33:51.156 vdpa/ifc: not in enabled drivers build config 00:33:51.156 vdpa/mlx5: not in enabled drivers build config 00:33:51.156 vdpa/nfp: not in enabled drivers build config 00:33:51.156 vdpa/sfc: not in enabled drivers build config 00:33:51.156 event/*: missing internal dependency, "eventdev" 00:33:51.156 baseband/*: missing internal dependency, "bbdev" 00:33:51.156 gpu/*: missing internal dependency, "gpudev" 00:33:51.156 00:33:51.156 00:33:51.156 Build targets in project: 85 00:33:51.156 00:33:51.156 DPDK 23.11.0 00:33:51.156 00:33:51.156 User defined options 00:33:51.156 default_library : static 00:33:51.156 libdir : lib 00:33:51.156 prefix : 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:33:51.156 b_lto : true 00:33:51.156 b_sanitize : address 00:33:51.156 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon 00:33:51.156 c_link_args : 00:33:51.156 cpu_instruction_set: native 00:33:51.156 disable_apps : test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf 00:33:51.156 disable_libs : node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev 00:33:51.156 enable_docs : false 00:33:51.156 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:33:51.156 enable_kmods : false 00:33:51.156 tests : false 00:33:51.156 00:33:51.157 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:33:51.723 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:33:51.723 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:33:51.723 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:33:51.723 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:33:51.723 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:33:51.723 [5/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:33:51.723 [6/265] Linking static target lib/librte_kvargs.a 00:33:51.981 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:33:51.981 [8/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:33:51.981 [9/265] Linking static target lib/librte_log.a 00:33:51.981 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:33:51.981 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:33:51.981 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:33:52.240 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:33:52.240 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:33:52.240 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:33:52.240 [16/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:33:52.499 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:33:52.499 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:33:52.499 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:33:52.499 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:33:52.757 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:33:52.757 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:33:52.757 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:33:52.757 [24/265] Linking target lib/librte_log.so.24.0 00:33:52.757 [25/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:33:52.757 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:33:53.017 [27/265] Linking target lib/librte_kvargs.so.24.0 00:33:53.017 [28/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:33:53.017 [29/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:33:53.017 [30/265] Linking static target lib/librte_telemetry.a 00:33:53.017 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:33:53.017 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:33:53.017 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:33:53.017 [34/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:33:53.276 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:33:53.276 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:33:53.276 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:33:53.276 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:33:53.276 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:33:53.276 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:33:53.276 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:33:53.535 [42/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:33:53.535 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:33:53.535 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:33:53.535 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:33:53.793 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:33:53.793 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:33:53.793 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:33:53.793 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:33:54.052 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:33:54.052 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:33:54.052 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:33:54.052 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:33:54.052 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:33:54.052 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:33:54.052 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:33:54.052 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:33:54.310 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:33:54.310 [59/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:33:54.310 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:33:54.310 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:33:54.310 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:33:54.310 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:33:54.310 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:33:54.569 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:33:54.569 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:33:54.569 [67/265] Linking target lib/librte_telemetry.so.24.0 00:33:54.569 
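The "Generating lib/<name>.sym_chk" steps interleaved with the compile jobs run buildtools/check-symbols.sh (detected during configuration earlier in this log) to compare what each library actually exports against its declared symbol list. A crude approximation of that idea with nm; the archive path is assumed from the build directory named above:

# List the global defined symbols the freshly built archive actually exports;
# check-symbols.sh compares such a list against the library's version.map.
nm -g --defined-only build-tmp/lib/librte_kvargs.a \
  | awk 'NF == 3 {print $3}' | sort -u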
[68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:33:54.569 [69/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:33:54.569 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:33:54.827 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:33:54.827 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:33:54.827 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:33:54.827 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:33:54.827 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:33:54.827 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:33:54.827 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:33:54.827 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:33:55.086 [79/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:33:55.086 [80/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:33:55.345 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:33:55.345 [82/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:33:55.345 [83/265] Linking static target lib/librte_ring.a 00:33:55.345 [84/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:33:55.345 [85/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:33:55.345 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:33:55.603 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:33:55.603 [88/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:33:55.603 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:33:55.603 [90/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:33:55.862 [91/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:33:55.862 [92/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:33:55.862 [93/265] Linking static target lib/librte_eal.a 00:33:55.862 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:33:55.862 [95/265] Linking static target lib/librte_mempool.a 00:33:55.862 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:33:55.862 [97/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:33:55.862 [98/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:33:56.121 [99/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:33:56.121 [100/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:33:56.121 [101/265] Linking static target lib/librte_rcu.a 00:33:56.121 [102/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:33:56.121 [103/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:33:56.380 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:33:56.380 [105/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:33:56.380 [106/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:33:56.380 [107/265] Linking static target lib/librte_net.a 00:33:56.380 [108/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:33:56.380 [109/265] Linking static target lib/librte_meter.a 00:33:56.380 [110/265] Generating 
lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:33:56.639 [111/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:33:56.639 [112/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:33:56.639 [113/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:33:56.639 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:33:56.639 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:33:56.897 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:33:57.209 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:33:57.209 [118/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:33:57.209 [119/265] Linking static target lib/librte_mbuf.a 00:33:57.209 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:33:57.469 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:33:57.727 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:33:57.727 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:33:57.727 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:33:57.727 [125/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:33:57.727 [126/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:33:57.727 [127/265] Linking static target lib/librte_pci.a 00:33:57.727 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:33:57.727 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:33:57.986 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:33:57.986 [131/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:33:57.986 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:33:57.986 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:33:57.986 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:33:57.986 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:33:57.986 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:33:58.245 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:33:58.245 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:33:58.245 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:33:58.245 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:33:58.245 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:33:58.245 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:33:58.504 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:33:58.504 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:33:58.504 [145/265] Linking static target lib/librte_cmdline.a 00:33:58.504 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:33:58.762 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:33:59.021 [148/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:33:59.021 [149/265] Compiling C 
object lib/librte_timer.a.p/timer_rte_timer.c.o 00:33:59.021 [150/265] Linking static target lib/librte_timer.a 00:33:59.021 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:33:59.021 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:33:59.021 [153/265] Linking static target lib/librte_compressdev.a 00:33:59.280 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:33:59.280 [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:33:59.280 [156/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:33:59.280 [157/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:33:59.280 [158/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:33:59.539 [159/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:33:59.539 [160/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:33:59.539 [161/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:33:59.539 [162/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:33:59.798 [163/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:34:00.056 [164/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:34:00.056 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:34:00.056 [166/265] Linking static target lib/librte_dmadev.a 00:34:00.056 [167/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:34:00.315 [168/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:34:00.315 [169/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:34:00.315 [170/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:34:00.315 [171/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:00.315 [172/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:34:00.574 [173/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:34:00.574 [174/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:34:00.843 [175/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:34:00.843 [176/265] Linking static target lib/librte_power.a 00:34:00.843 [177/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:34:00.843 [178/265] Linking static target lib/librte_reorder.a 00:34:01.108 [179/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:34:01.108 [180/265] Linking static target lib/librte_security.a 00:34:01.108 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:34:01.108 [182/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:34:01.108 [183/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:34:01.367 [184/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:34:01.367 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:34:01.367 [186/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:34:01.626 [187/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:34:01.626 [188/265] Linking static target lib/librte_ethdev.a 00:34:01.626 [189/265] Compiling C 
object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:34:01.626 [190/265] Linking static target lib/librte_cryptodev.a
00:34:01.884 [191/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:34:02.143 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:34:02.143 [193/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:34:02.144 [194/265] Linking static target lib/librte_hash.a
00:34:02.144 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:34:02.144 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:34:02.712 [197/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:34:02.712 [198/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:34:02.712 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:34:02.712 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:34:02.971 [201/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:34:02.971 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:34:03.229 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:34:03.229 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:34:03.488 [205/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:34:03.488 [206/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:34:03.488 [207/265] Linking static target drivers/libtmp_rte_bus_vdev.a
00:34:03.488 [208/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:34:03.488 [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:34:03.488 [210/265] Linking static target drivers/libtmp_rte_bus_pci.a
00:34:03.488 [211/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:34:03.488 [212/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:34:03.488 [213/265] Linking static target drivers/librte_bus_vdev.a
00:34:03.747 [214/265] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:34:03.747 [215/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:34:03.747 [216/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:34:03.747 [217/265] Linking static target drivers/librte_bus_pci.a
00:34:03.747 [218/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:34:03.747 [219/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:34:03.747 [220/265] Linking static target drivers/libtmp_rte_mempool_ring.a
00:34:04.006 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:34:04.006 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:34:04.006 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:34:04.006 [224/265] Linking static target drivers/librte_mempool_ring.a
00:34:04.264 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:34:07.550 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:34:11.735 [227/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:34:12.302 [228/265] Linking target lib/librte_eal.so.24.0
00:34:12.302 lto-wrapper: warning: using serial compilation of 5 LTRANS jobs
00:34:12.302 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:34:12.561 [230/265] Linking target lib/librte_meter.so.24.0
00:34:12.561 [231/265] Linking target lib/librte_pci.so.24.0
00:34:12.561 [232/265] Linking target lib/librte_ring.so.24.0
00:34:12.819 [233/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:34:12.819 [234/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:34:12.819 [235/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:34:12.819 [236/265] Linking target drivers/librte_bus_vdev.so.24.0
00:34:13.078 [237/265] Linking target lib/librte_timer.so.24.0
00:34:13.078 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:34:13.078 [239/265] Linking target lib/librte_dmadev.so.24.0
00:34:13.336 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:34:13.594 [241/265] Linking target lib/librte_rcu.so.24.0
00:34:13.594 [242/265] Linking target lib/librte_mempool.so.24.0
00:34:13.594 [243/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:34:13.853 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:34:14.112 [245/265] Linking target drivers/librte_bus_pci.so.24.0
00:34:14.370 [246/265] Linking target drivers/librte_mempool_ring.so.24.0
00:34:15.788 [247/265] Linking target lib/librte_mbuf.so.24.0
00:34:15.788 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:34:16.354 [249/265] Linking target lib/librte_reorder.so.24.0
00:34:16.354 [250/265] Linking target lib/librte_compressdev.so.24.0
00:34:16.612 [251/265] Linking target lib/librte_net.so.24.0
00:34:16.870 [252/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:34:17.806 [253/265] Linking target lib/librte_cmdline.so.24.0
00:34:18.064 [254/265] Linking target lib/librte_cryptodev.so.24.0
00:34:18.064 [255/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:34:18.630 [256/265] Linking target lib/librte_security.so.24.0
00:34:21.160 [257/265] Linking target lib/librte_hash.so.24.0
00:34:21.160 [258/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:34:27.722 [259/265] Linking target lib/librte_ethdev.so.24.0
00:34:27.722 lto-wrapper: warning: using serial compilation of 6 LTRANS jobs
00:34:27.722 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:34:29.625 [261/265] Linking target lib/librte_power.so.24.0
00:34:32.157 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:34:32.158 [263/265] Linking static target lib/librte_vhost.a
00:34:34.059 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:35:20.769 [265/265] Linking target lib/librte_vhost.so.24.0
00:35:20.769 lto-wrapper: warning: using serial compilation of 8 LTRANS jobs
00:35:20.769 INFO: autodetecting backend as ninja
00:35:20.769 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:35:20.769 CC lib/log/log_flags.o
00:35:20.769 CC lib/log/log.o
00:35:20.769 CC lib/ut_mock/mock.o
00:35:20.769 CC lib/log/log_deprecated.o
00:35:20.769 CC lib/ut/ut.o
00:35:20.769 LIB libspdk_ut_mock.a
00:35:20.769 LIB libspdk_log.a
00:35:20.769 LIB libspdk_ut.a
00:35:20.769 CC lib/dma/dma.o
00:35:20.769 CC lib/util/base64.o
00:35:20.769 CC lib/util/bit_array.o
00:35:20.769 CC lib/util/crc16.o
00:35:20.769 CC lib/ioat/ioat.o
00:35:20.769 CC lib/util/cpuset.o
00:35:20.769 CC lib/util/crc32.o
00:35:20.769 CXX lib/trace_parser/trace.o
00:35:20.769 CC lib/util/crc32c.o
00:35:20.769 CC lib/vfio_user/host/vfio_user_pci.o
00:35:20.769 CC lib/vfio_user/host/vfio_user.o
00:35:20.769 CC lib/util/crc32_ieee.o
00:35:20.769 CC lib/util/crc64.o
00:35:20.769 CC lib/util/dif.o
00:35:20.769 LIB libspdk_dma.a
00:35:20.769 LIB libspdk_ioat.a
00:35:20.769 CC lib/util/fd.o
00:35:20.769 CC lib/util/file.o
00:35:20.769 CC lib/util/hexlify.o
00:35:20.769 CC lib/util/iov.o
00:35:20.769 CC lib/util/math.o
00:35:20.769 CC lib/util/pipe.o
00:35:20.769 CC lib/util/strerror_tls.o
00:35:20.769 CC lib/util/string.o
00:35:20.769 LIB libspdk_vfio_user.a
00:35:20.769 CC lib/util/uuid.o
00:35:20.769 CC lib/util/fd_group.o
00:35:20.769 CC lib/util/xor.o
00:35:20.769 CC lib/util/zipf.o
00:35:20.769 LIB libspdk_util.a
00:35:20.769 LIB libspdk_trace_parser.a
00:35:20.769 CC lib/conf/conf.o
00:35:20.769 CC lib/env_dpdk/env.o
00:35:20.769 CC lib/env_dpdk/memory.o
00:35:20.769 CC lib/json/json_parse.o
00:35:20.769 CC lib/vmd/vmd.o
00:35:20.769 CC lib/env_dpdk/init.o
00:35:20.769 CC lib/env_dpdk/pci.o
00:35:20.769 CC lib/json/json_util.o
00:35:20.769 CC lib/rdma/common.o
00:35:20.769 CC lib/idxd/idxd.o
00:35:20.769 LIB libspdk_conf.a
00:35:20.769 CC lib/idxd/idxd_user.o
00:35:20.769 CC lib/rdma/rdma_verbs.o
00:35:20.769 CC lib/json/json_write.o
00:35:20.769 CC lib/env_dpdk/threads.o
00:35:20.769 CC lib/env_dpdk/pci_ioat.o
00:35:20.769 CC lib/env_dpdk/pci_virtio.o
00:35:20.769 CC lib/vmd/led.o
00:35:20.769 CC lib/env_dpdk/pci_vmd.o
00:35:20.769 CC lib/env_dpdk/pci_idxd.o
00:35:20.769 LIB libspdk_idxd.a
00:35:20.769 LIB libspdk_rdma.a
00:35:20.769 CC lib/env_dpdk/pci_event.o
00:35:20.769 CC lib/env_dpdk/sigbus_handler.o
00:35:20.769 LIB libspdk_json.a
00:35:20.769 CC lib/env_dpdk/pci_dpdk.o
00:35:20.769 CC lib/env_dpdk/pci_dpdk_2207.o
00:35:20.769 CC lib/env_dpdk/pci_dpdk_2211.o
00:35:20.769 LIB libspdk_vmd.a
00:35:20.769 CC lib/jsonrpc/jsonrpc_server.o
00:35:20.769 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:35:20.769 CC lib/jsonrpc/jsonrpc_client.o
00:35:20.769 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:35:20.769 LIB libspdk_jsonrpc.a
00:35:20.769 CC lib/rpc/rpc.o
00:35:20.769 LIB libspdk_rpc.a
00:35:20.769 LIB libspdk_env_dpdk.a
00:35:20.769 CC lib/notify/notify.o
00:35:20.769 CC lib/trace/trace_flags.o
00:35:20.769 CC lib/trace/trace_rpc.o
00:35:20.769 CC lib/notify/notify_rpc.o
00:35:20.769 CC lib/trace/trace.o
00:35:20.769 CC lib/sock/sock.o
00:35:20.769 CC lib/sock/sock_rpc.o
00:35:20.769 LIB libspdk_notify.a
00:35:20.769 LIB libspdk_trace.a
00:35:20.769 LIB libspdk_sock.a
00:35:20.769 CC lib/thread/iobuf.o
00:35:20.769 CC lib/thread/thread.o
00:35:20.769 CC lib/nvme/nvme_ctrlr_cmd.o
00:35:20.769 CC lib/nvme/nvme_ctrlr.o
00:35:20.769 CC lib/nvme/nvme_fabric.o
00:35:20.769 CC lib/nvme/nvme_ns.o
00:35:20.769 CC lib/nvme/nvme_pcie_common.o
00:35:20.769 CC lib/nvme/nvme_pcie.o
00:35:20.769 CC lib/nvme/nvme_ns_cmd.o
00:35:20.769 CC lib/nvme/nvme_qpair.o
00:35:20.769 CC lib/nvme/nvme.o
00:35:20.769 CC lib/nvme/nvme_quirks.o
00:35:20.769 LIB libspdk_thread.a
00:35:20.769 CC lib/nvme/nvme_transport.o
00:35:20.770 CC lib/nvme/nvme_discovery.o
00:35:20.770 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:35:20.770 CC lib/accel/accel.o
00:35:20.770 CC lib/blob/blobstore.o
00:35:20.770 CC lib/init/json_config.o
00:35:20.770 CC lib/blob/request.o
00:35:20.770 CC lib/init/subsystem.o
00:35:20.770 CC lib/init/subsystem_rpc.o
00:35:20.770 CC lib/blob/zeroes.o
00:35:20.770 CC lib/blob/blob_bs_dev.o
00:35:20.770 CC lib/init/rpc.o
00:35:20.770 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:35:20.770 CC lib/nvme/nvme_tcp.o
00:35:20.770 CC lib/nvme/nvme_opal.o
00:35:20.770 CC lib/virtio/virtio.o
00:35:20.770 CC lib/nvme/nvme_io_msg.o
00:35:20.770 CC lib/accel/accel_rpc.o
00:35:20.770 CC lib/accel/accel_sw.o
00:35:20.770 CC lib/nvme/nvme_poll_group.o
00:35:20.770 LIB libspdk_init.a
00:35:20.770 CC lib/nvme/nvme_zns.o
00:35:20.770 CC lib/virtio/virtio_vhost_user.o
00:35:20.770 CC lib/nvme/nvme_cuse.o
00:35:20.770 LIB libspdk_accel.a
00:35:20.770 CC lib/nvme/nvme_vfio_user.o
00:35:20.770 CC lib/virtio/virtio_vfio_user.o
00:35:20.770 CC lib/virtio/virtio_pci.o
00:35:20.770 CC lib/nvme/nvme_rdma.o
00:35:20.770 CC lib/event/app.o
00:35:20.770 CC lib/event/reactor.o
00:35:20.770 CC lib/event/log_rpc.o
00:35:20.770 CC lib/event/app_rpc.o
00:35:20.770 LIB libspdk_virtio.a
00:35:20.770 CC lib/event/scheduler_static.o
00:35:20.770 CC lib/bdev/bdev.o
00:35:20.770 CC lib/bdev/bdev_rpc.o
00:35:20.770 CC lib/bdev/bdev_zone.o
00:35:20.770 CC lib/bdev/part.o
00:35:20.770 CC lib/bdev/scsi_nvme.o
00:35:20.770 LIB libspdk_event.a
00:35:20.770 LIB libspdk_blob.a
00:35:20.770 CC lib/lvol/lvol.o
00:35:20.770 CC lib/blobfs/blobfs.o
00:35:20.770 CC lib/blobfs/tree.o
00:35:20.770 LIB libspdk_nvme.a
00:35:20.770 LIB libspdk_blobfs.a
00:35:20.770 LIB libspdk_lvol.a
00:35:20.770 LIB libspdk_bdev.a
00:35:20.770 CC lib/nvmf/ctrlr.o
00:35:20.770 CC lib/nvmf/ctrlr_discovery.o
00:35:20.770 CC lib/nvmf/subsystem.o
00:35:20.770 CC lib/nvmf/nvmf.o
00:35:20.770 CC lib/nvmf/ctrlr_bdev.o
00:35:20.770 CC lib/nvmf/nvmf_rpc.o
00:35:20.770 CC lib/scsi/lun.o
00:35:20.770 CC lib/nbd/nbd.o
00:35:20.770 CC lib/scsi/dev.o
00:35:20.770 CC lib/ftl/ftl_core.o
00:35:20.770 CC lib/ftl/ftl_init.o
00:35:20.770 CC lib/ftl/ftl_layout.o
00:35:20.770 CC lib/scsi/port.o
00:35:20.770 CC lib/scsi/scsi.o
00:35:20.770 CC lib/nbd/nbd_rpc.o
00:35:20.770 CC lib/scsi/scsi_bdev.o
00:35:20.770 CC lib/scsi/scsi_pr.o
00:35:20.770 CC lib/ftl/ftl_debug.o
00:35:20.770 CC lib/nvmf/transport.o
00:35:20.770 CC lib/scsi/scsi_rpc.o
00:35:20.770 CC lib/ftl/ftl_io.o
00:35:20.770 LIB libspdk_nbd.a
00:35:20.770 CC lib/ftl/ftl_sb.o
00:35:20.770 CC lib/ftl/ftl_l2p.o
00:35:20.770 CC lib/ftl/ftl_l2p_flat.o
00:35:20.770 CC lib/ftl/ftl_nv_cache.o
00:35:20.770 CC lib/nvmf/tcp.o
00:35:20.770 CC lib/nvmf/rdma.o
00:35:20.770 CC lib/ftl/ftl_band.o
00:35:20.770 CC lib/scsi/task.o
00:35:20.770 CC lib/ftl/ftl_band_ops.o
00:35:21.029 CC lib/ftl/ftl_writer.o
00:35:21.029 CC lib/ftl/ftl_rq.o
00:35:21.029 CC lib/ftl/ftl_reloc.o
00:35:21.029 CC lib/ftl/ftl_l2p_cache.o
00:35:21.029 LIB libspdk_scsi.a
00:35:21.029 CC lib/ftl/ftl_p2l.o
00:35:21.029 CC lib/ftl/mngt/ftl_mngt.o
00:35:21.029 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:35:21.029 CC lib/iscsi/conn.o
00:35:21.029 CC lib/iscsi/init_grp.o
00:35:21.029 CC lib/vhost/vhost.o
00:35:21.287 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:35:21.287 CC lib/iscsi/iscsi.o
00:35:21.287 CC lib/ftl/mngt/ftl_mngt_startup.o
00:35:21.287 CC lib/vhost/vhost_rpc.o
00:35:21.287 CC lib/vhost/vhost_scsi.o
00:35:21.287 CC lib/iscsi/md5.o
00:35:21.287 CC lib/iscsi/param.o
00:35:21.287 CC lib/iscsi/portal_grp.o
00:35:21.287 CC lib/ftl/mngt/ftl_mngt_md.o
00:35:21.287 CC lib/ftl/mngt/ftl_mngt_misc.o
00:35:21.546 CC lib/iscsi/tgt_node.o
00:35:21.546 CC lib/iscsi/iscsi_subsystem.o
00:35:21.546 CC lib/vhost/vhost_blk.o
00:35:21.546 LIB libspdk_nvmf.a
00:35:21.546 CC lib/iscsi/iscsi_rpc.o
00:35:21.546 CC lib/iscsi/task.o
00:35:21.546 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:35:21.546 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:35:21.546 CC lib/ftl/mngt/ftl_mngt_band.o
00:35:21.805 CC lib/vhost/rte_vhost_user.o
00:35:21.805 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:35:21.805 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:35:21.805 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:35:21.805 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:35:21.805 LIB libspdk_iscsi.a
00:35:21.805 CC lib/ftl/utils/ftl_conf.o
00:35:21.805 CC lib/ftl/utils/ftl_md.o
00:35:21.805 CC lib/ftl/utils/ftl_mempool.o
00:35:21.805 CC lib/ftl/utils/ftl_bitmap.o
00:35:21.805 CC lib/ftl/utils/ftl_property.o
00:35:21.805 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:35:22.063 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:35:22.063 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:35:22.063 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:35:22.063 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:35:22.063 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:35:22.063 CC lib/ftl/upgrade/ftl_sb_v3.o
00:35:22.063 CC lib/ftl/upgrade/ftl_sb_v5.o
00:35:22.063 CC lib/ftl/nvc/ftl_nvc_dev.o
00:35:22.063 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:35:22.063 CC lib/ftl/base/ftl_base_dev.o
00:35:22.063 CC lib/ftl/base/ftl_base_bdev.o
00:35:22.322 LIB libspdk_ftl.a
00:35:22.322 LIB libspdk_vhost.a
00:35:22.581 CC module/env_dpdk/env_dpdk_rpc.o
00:35:22.581 CC module/blob/bdev/blob_bdev.o
00:35:22.581 CC module/scheduler/gscheduler/gscheduler.o
00:35:22.581 CC module/scheduler/dynamic/scheduler_dynamic.o
00:35:22.581 CC module/accel/error/accel_error.o
00:35:22.581 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:35:22.581 CC module/accel/ioat/accel_ioat.o
00:35:22.581 CC module/accel/iaa/accel_iaa.o
00:35:22.581 CC module/accel/dsa/accel_dsa.o
00:35:22.581 CC module/sock/posix/posix.o
00:35:22.581 LIB libspdk_env_dpdk_rpc.a
00:35:22.581 CC module/accel/iaa/accel_iaa_rpc.o
00:35:22.581 LIB libspdk_scheduler_gscheduler.a
00:35:22.581 LIB libspdk_scheduler_dpdk_governor.a
00:35:22.581 CC module/accel/error/accel_error_rpc.o
00:35:22.581 CC module/accel/dsa/accel_dsa_rpc.o
00:35:22.581 CC module/accel/ioat/accel_ioat_rpc.o
00:35:22.581 LIB libspdk_scheduler_dynamic.a
00:35:22.581 LIB libspdk_blob_bdev.a
00:35:22.840 LIB libspdk_accel_iaa.a
00:35:22.840 LIB libspdk_accel_dsa.a
00:35:22.840 LIB libspdk_accel_error.a
00:35:22.840 LIB libspdk_accel_ioat.a
00:35:22.840 CC module/bdev/delay/vbdev_delay.o
00:35:22.840 CC module/bdev/error/vbdev_error.o
00:35:22.840 CC module/blobfs/bdev/blobfs_bdev.o
00:35:22.840 CC module/bdev/lvol/vbdev_lvol.o
00:35:22.840 CC module/bdev/malloc/bdev_malloc.o
00:35:22.840 CC module/bdev/gpt/gpt.o
00:35:22.840 CC module/bdev/nvme/bdev_nvme.o
00:35:22.840 CC module/bdev/passthru/vbdev_passthru.o
00:35:22.840 CC module/bdev/null/bdev_null.o
00:35:22.840 CC module/bdev/gpt/vbdev_gpt.o
00:35:22.840 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:35:23.099 LIB libspdk_sock_posix.a
00:35:23.099 CC module/bdev/error/vbdev_error_rpc.o
00:35:23.099 CC module/bdev/null/bdev_null_rpc.o
00:35:23.099 CC module/bdev/malloc/bdev_malloc_rpc.o
00:35:23.099 CC module/bdev/delay/vbdev_delay_rpc.o
00:35:23.099 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:35:23.099 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:35:23.099 LIB libspdk_blobfs_bdev.a
00:35:23.099 CC module/bdev/nvme/bdev_nvme_rpc.o
00:35:23.099 LIB libspdk_bdev_error.a
00:35:23.099 CC module/bdev/nvme/nvme_rpc.o
00:35:23.099 LIB libspdk_bdev_gpt.a
00:35:23.099 LIB libspdk_bdev_null.a
00:35:23.099 CC module/bdev/nvme/bdev_mdns_client.o
00:35:23.099 LIB libspdk_bdev_malloc.a
00:35:23.099 LIB libspdk_bdev_delay.a
00:35:23.099 CC module/bdev/nvme/vbdev_opal.o
00:35:23.099 LIB libspdk_bdev_passthru.a
00:35:23.099 CC module/bdev/nvme/vbdev_opal_rpc.o
00:35:23.099 CC module/bdev/raid/bdev_raid.o
00:35:23.357 CC module/bdev/zone_block/vbdev_zone_block.o
00:35:23.357 CC module/bdev/split/vbdev_split.o
00:35:23.357 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:35:23.357 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:35:23.357 LIB libspdk_bdev_lvol.a
00:35:23.357 CC module/bdev/aio/bdev_aio.o
00:35:23.357 CC module/bdev/aio/bdev_aio_rpc.o
00:35:23.357 CC module/bdev/split/vbdev_split_rpc.o
00:35:23.357 CC module/bdev/ftl/bdev_ftl.o
00:35:23.357 CC module/bdev/iscsi/bdev_iscsi.o
00:35:23.357 CC module/bdev/ftl/bdev_ftl_rpc.o
00:35:23.357 LIB libspdk_bdev_zone_block.a
00:35:23.357 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:35:23.616 CC module/bdev/virtio/bdev_virtio_scsi.o
00:35:23.616 CC module/bdev/virtio/bdev_virtio_blk.o
00:35:23.616 LIB libspdk_bdev_split.a
00:35:23.616 CC module/bdev/virtio/bdev_virtio_rpc.o
00:35:23.616 CC module/bdev/raid/bdev_raid_rpc.o
00:35:23.616 LIB libspdk_bdev_aio.a
00:35:23.616 CC module/bdev/raid/bdev_raid_sb.o
00:35:23.616 LIB libspdk_bdev_ftl.a
00:35:23.616 CC module/bdev/raid/raid0.o
00:35:23.616 CC module/bdev/raid/raid1.o
00:35:23.616 CC module/bdev/raid/concat.o
00:35:23.616 LIB libspdk_bdev_iscsi.a
00:35:23.616 CC module/bdev/raid/raid5f.o
00:35:23.874 LIB libspdk_bdev_nvme.a
00:35:23.874 LIB libspdk_bdev_virtio.a
00:35:23.874 LIB libspdk_bdev_raid.a
00:35:24.133 CC module/event/subsystems/scheduler/scheduler.o
00:35:24.133 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:35:24.133 CC module/event/subsystems/iobuf/iobuf.o
00:35:24.133 CC module/event/subsystems/vmd/vmd.o
00:35:24.133 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:35:24.133 CC module/event/subsystems/vmd/vmd_rpc.o
00:35:24.133 CC module/event/subsystems/sock/sock.o
00:35:24.133 LIB libspdk_event_sock.a
00:35:24.133 LIB libspdk_event_scheduler.a
00:35:24.133 LIB libspdk_event_vmd.a
00:35:24.133 LIB libspdk_event_vhost_blk.a
00:35:24.133 LIB libspdk_event_iobuf.a
00:35:24.390 CC module/event/subsystems/accel/accel.o
00:35:24.390 LIB libspdk_event_accel.a
00:35:24.648 CC module/event/subsystems/bdev/bdev.o
00:35:24.648 LIB libspdk_event_bdev.a
00:35:24.906 CC module/event/subsystems/scsi/scsi.o
00:35:24.906 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:35:24.906 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:35:24.906 CC module/event/subsystems/nbd/nbd.o
00:35:24.906 LIB libspdk_event_scsi.a
00:35:24.906 LIB libspdk_event_nbd.a
00:35:25.164 LIB libspdk_event_nvmf.a
00:35:25.164 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:35:25.164 CC module/event/subsystems/iscsi/iscsi.o
00:35:25.164 LIB libspdk_event_vhost_scsi.a
00:35:25.164 LIB libspdk_event_iscsi.a
00:35:25.423 CXX app/trace/trace.o
00:35:25.423 CC app/trace_record/trace_record.o
00:35:25.423 CC app/spdk_lspci/spdk_lspci.o
00:35:25.423 CC app/nvmf_tgt/nvmf_main.o
00:35:25.423 CC app/iscsi_tgt/iscsi_tgt.o
00:35:25.423 CC examples/blob/hello_world/hello_blob.o
00:35:25.423 CC app/spdk_tgt/spdk_tgt.o
00:35:25.423 CC test/accel/dif/dif.o
00:35:25.423 CC examples/accel/perf/accel_perf.o
00:35:25.423 CC examples/bdev/hello_world/hello_bdev.o
00:35:25.423 LINK spdk_lspci
00:35:25.681 LINK spdk_trace_record
00:35:25.681 LINK nvmf_tgt
00:35:25.681 LINK iscsi_tgt
00:35:25.681 LINK spdk_trace
00:35:25.681 LINK hello_blob
00:35:25.682 LINK spdk_tgt
00:35:25.682 LINK hello_bdev
00:35:25.682 LINK accel_perf
00:35:25.940 LINK dif
00:35:31.211 CC examples/blob/cli/blobcli.o
00:35:31.470 LINK blobcli
00:35:39.585 CC examples/ioat/perf/perf.o
00:35:40.521 LINK ioat_perf
00:36:12.690 CC test/app/bdev_svc/bdev_svc.o
00:36:12.690 CC test/bdev/bdevio/bdevio.o
00:36:12.949 LINK bdev_svc
00:36:12.949 LINK bdevio
00:36:25.154 CC examples/ioat/verify/verify.o
00:36:26.091 LINK verify
00:37:12.766 CC examples/nvme/hello_world/hello_world.o
00:37:12.766 LINK hello_world
00:38:08.987 CC examples/bdev/bdevperf/bdevperf.o
00:38:08.987 LINK bdevperf
00:38:08.987 CC examples/sock/hello_world/hello_sock.o
00:38:08.987 LINK hello_sock
00:38:08.987 CC examples/vmd/lsvmd/lsvmd.o
00:38:08.987 LINK lsvmd
00:38:14.256 CC examples/nvme/reconnect/reconnect.o
00:38:14.256 CC examples/nvme/nvme_manage/nvme_manage.o
00:38:14.823 LINK reconnect
00:38:15.391 CC examples/nvmf/nvmf/nvmf.o
00:38:15.391 LINK nvme_manage
00:38:16.768 LINK nvmf
00:38:19.299 CC examples/util/zipf/zipf.o
00:38:19.557 LINK zipf
00:38:51.647 CC app/spdk_nvme_perf/perf.o
00:38:51.647 LINK spdk_nvme_perf
00:38:51.647 CC examples/nvme/arbitration/arbitration.o
00:38:52.214 CC examples/nvme/hotplug/hotplug.o
00:38:52.473 LINK arbitration
00:38:53.410 LINK hotplug
00:38:53.410 CC examples/vmd/led/led.o
00:38:53.977 LINK led
00:38:54.544 CC examples/nvme/cmb_copy/cmb_copy.o
00:38:55.480 LINK cmb_copy
00:38:57.382 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:38:58.762 LINK nvme_fuzz
00:39:01.295 CC test/blobfs/mkfs/mkfs.o
00:39:02.671 LINK mkfs
00:39:10.820 TEST_HEADER include/spdk/config.h
00:39:10.820 CXX test/cpp_headers/accel.o
00:39:12.193 CXX test/cpp_headers/accel_module.o
00:39:13.129 CXX test/cpp_headers/assert.o
00:39:14.504 CXX test/cpp_headers/barrier.o
00:39:15.880 CXX test/cpp_headers/base64.o
00:39:17.255 CXX test/cpp_headers/bdev.o
00:39:19.158 CXX test/cpp_headers/bdev_module.o
00:39:20.535 CXX test/cpp_headers/bdev_zone.o
00:39:22.437 CXX test/cpp_headers/bit_array.o
00:39:23.815 CXX test/cpp_headers/bit_pool.o
00:39:25.191 CXX test/cpp_headers/blob.o
00:39:26.566 CXX test/cpp_headers/blob_bdev.o
00:39:28.469 CXX test/cpp_headers/blobfs.o
00:39:29.405 CXX test/cpp_headers/blobfs_bdev.o
00:39:31.308 CXX test/cpp_headers/conf.o
00:39:32.244 CXX test/cpp_headers/config.o
00:39:32.502 CXX test/cpp_headers/cpuset.o
00:39:33.906 CXX test/cpp_headers/crc16.o
00:39:35.808 CXX test/cpp_headers/crc32.o
00:39:37.184 CXX test/cpp_headers/crc64.o
00:39:38.119 CXX test/cpp_headers/dif.o
00:39:39.495 CXX test/cpp_headers/dma.o
00:39:40.431 CXX test/cpp_headers/endian.o
00:39:41.807 CXX test/cpp_headers/env.o
00:39:42.742 CXX test/cpp_headers/env_dpdk.o
00:39:43.678 CXX test/cpp_headers/event.o
00:39:44.244 CXX test/cpp_headers/fd.o
00:39:45.180 CXX test/cpp_headers/fd_group.o
00:39:45.748 CXX test/cpp_headers/file.o
00:39:46.315 CXX test/cpp_headers/ftl.o
00:39:46.315 CC test/dma/test_dma/test_dma.o
00:39:46.574 CXX test/cpp_headers/gpt_spec.o
00:39:47.141 CXX test/cpp_headers/hexlify.o
00:39:47.400 CXX test/cpp_headers/histogram_data.o
00:39:47.658 LINK test_dma
00:39:47.658 CC examples/thread/thread/thread_ex.o
00:39:48.226 CC test/app/histogram_perf/histogram_perf.o
00:39:48.226 CXX test/cpp_headers/idxd.o
00:39:48.793 LINK histogram_perf
00:39:48.793 LINK thread
00:39:48.793 CXX test/cpp_headers/idxd_spec.o
00:39:49.052 CC app/spdk_nvme_identify/identify.o
00:39:49.619 CXX test/cpp_headers/init.o
00:39:50.186 CXX test/cpp_headers/ioat.o
00:39:50.752 CXX test/cpp_headers/ioat_spec.o
00:39:50.752 LINK spdk_nvme_identify
00:39:51.318 CC examples/nvme/abort/abort.o
00:39:51.577 CXX test/cpp_headers/iscsi_spec.o
00:39:52.175 CXX test/cpp_headers/json.o
00:39:52.175 LINK abort
00:39:52.742 CXX test/cpp_headers/jsonrpc.o
00:39:53.309 CXX test/cpp_headers/likely.o
00:39:54.245 CXX test/cpp_headers/log.o
00:39:54.812 CXX test/cpp_headers/lvol.o
00:39:55.747 CXX test/cpp_headers/memory.o
00:39:56.317 CXX test/cpp_headers/mmio.o
00:39:57.255 CXX test/cpp_headers/nbd.o
00:39:57.255 CXX test/cpp_headers/notify.o
00:39:58.191 CXX test/cpp_headers/nvme.o
00:39:59.566 CXX test/cpp_headers/nvme_intel.o
00:40:00.134 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:40:00.134 CXX test/cpp_headers/nvme_ocssd.o
00:40:01.509 CXX test/cpp_headers/nvme_ocssd_spec.o
00:40:02.444 CXX test/cpp_headers/nvme_spec.o
00:40:03.380 CXX test/cpp_headers/nvme_zns.o
00:40:03.947 LINK iscsi_fuzz
00:40:04.515 CXX test/cpp_headers/nvmf.o
00:40:05.891 CXX test/cpp_headers/nvmf_cmd.o
00:40:07.266 CXX test/cpp_headers/nvmf_fc_spec.o
00:40:08.642 CXX test/cpp_headers/nvmf_spec.o
00:40:10.017 CXX test/cpp_headers/nvmf_transport.o
00:40:10.952 CXX test/cpp_headers/opal.o
00:40:12.331 CXX test/cpp_headers/opal_spec.o
00:40:13.300 CXX test/cpp_headers/pci_ids.o
00:40:13.558 CXX test/cpp_headers/pipe.o
00:40:14.493 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:40:14.752 CXX test/cpp_headers/queue.o
00:40:15.010 CXX test/cpp_headers/reduce.o
00:40:15.577 LINK pmr_persistence
00:40:16.143 CXX test/cpp_headers/rpc.o
00:40:17.519 CXX test/cpp_headers/scheduler.o
00:40:18.451 CXX test/cpp_headers/scsi.o
00:40:19.827 CXX test/cpp_headers/scsi_spec.o
00:40:19.827 CC test/env/mem_callbacks/mem_callbacks.o
00:40:20.763 CXX test/cpp_headers/sock.o
00:40:22.138 CXX test/cpp_headers/stdinc.o
00:40:23.073 CXX test/cpp_headers/string.o
00:40:23.331 LINK mem_callbacks
00:40:24.266 CXX test/cpp_headers/thread.o
00:40:25.641 CXX test/cpp_headers/trace.o
00:40:27.019 CXX test/cpp_headers/trace_parser.o
00:40:28.395 CXX test/cpp_headers/tree.o
00:40:28.654 CXX test/cpp_headers/ublk.o
00:40:30.032 CXX test/cpp_headers/util.o
00:40:31.428 CXX test/cpp_headers/uuid.o
00:40:32.809 CXX test/cpp_headers/version.o
00:40:33.075 CXX test/cpp_headers/vfio_user_pci.o
00:40:34.451 CXX test/cpp_headers/vfio_user_spec.o
00:40:35.387 CXX test/cpp_headers/vhost.o
00:40:36.763 CXX test/cpp_headers/vmd.o
00:40:37.698 CXX test/cpp_headers/xor.o
00:40:38.266 CXX test/cpp_headers/zipf.o
00:40:39.642 CC test/env/vtophys/vtophys.o
00:40:40.209 LINK vtophys
00:40:43.496 CC app/spdk_nvme_discover/discovery_aer.o
00:40:44.063 LINK spdk_nvme_discover
00:40:47.348 CC app/spdk_top/spdk_top.o
00:40:48.723 LINK spdk_top
00:40:49.660 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:40:50.595 LINK env_dpdk_post_init
00:41:00.568 CC test/app/jsoncat/jsoncat.o
00:41:00.826 LINK jsoncat
00:41:03.393 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:41:03.960 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:41:05.862 LINK vhost_fuzz
00:41:13.974 CC test/app/stub/stub.o
00:41:14.908 LINK stub
00:41:14.908 CC test/env/memory/memory_ut.o
00:41:16.810 CC app/vhost/vhost.o
00:41:17.377 LINK vhost
00:41:19.280 LINK memory_ut
00:41:23.466 CC app/spdk_dd/spdk_dd.o
00:41:24.840 LINK spdk_dd
00:41:27.373 CC app/fio/nvme/fio_plugin.o
00:41:29.275 LINK spdk_nvme
00:41:30.669 CC examples/idxd/perf/perf.o
00:41:30.669 CC examples/interrupt_tgt/interrupt_tgt.o
00:41:31.618 LINK interrupt_tgt
00:41:31.618 LINK idxd_perf
00:41:33.520 CC test/env/pci/pci_ut.o
00:41:34.456 LINK pci_ut
00:41:39.769 CC app/fio/bdev/fio_plugin.o
00:41:42.302 LINK spdk_bdev
00:41:54.505 CC test/event/event_perf/event_perf.o
00:41:54.764 LINK event_perf
00:41:55.332 CC test/lvol/esnap/esnap.o
00:42:00.600 CC test/event/reactor/reactor.o
00:42:00.600 LINK reactor
00:42:00.858 CC test/nvme/aer/aer.o
00:42:02.233 LINK aer
00:42:04.766 CC test/rpc_client/rpc_client_test.o
00:42:05.364 LINK rpc_client_test
00:42:07.898 LINK esnap
00:42:39.973 CC test/event/reactor_perf/reactor_perf.o
00:42:40.909 LINK reactor_perf
00:42:42.813 CC test/thread/poller_perf/poller_perf.o
00:42:43.381 LINK poller_perf
00:42:49.947 CC test/event/app_repeat/app_repeat.o
00:42:49.947 CC test/event/scheduler/scheduler.o
00:42:49.947 LINK app_repeat
00:42:50.516 LINK scheduler
00:43:02.724 CC test/thread/lock/spdk_lock.o
00:43:06.975 LINK spdk_lock
00:43:10.261 CC test/nvme/reset/reset.o
00:43:11.637 LINK reset
00:43:23.841 CC test/nvme/sgl/sgl.o
00:43:23.841 LINK sgl
00:43:25.246 CC test/nvme/e2edp/nvme_dp.o
00:43:26.624 LINK nvme_dp
00:43:53.171 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o
00:43:53.171 LINK histogram_ut
00:43:53.171 CC test/nvme/overhead/overhead.o
00:43:53.171 CC test/unit/lib/accel/accel.c/accel_ut.o
00:43:53.739 LINK overhead
00:43:55.117 CC test/nvme/err_injection/err_injection.o
00:43:55.703 LINK err_injection
00:43:57.620 LINK accel_ut
00:44:00.906 CC test/nvme/startup/startup.o
00:44:01.473 LINK startup
00:44:05.663 CC test/nvme/reserve/reserve.o
00:44:07.040 LINK reserve
00:44:12.312 CC test/unit/lib/bdev/bdev.c/bdev_ut.o
00:44:12.572 CC test/nvme/simple_copy/simple_copy.o
00:44:13.953 LINK simple_copy
00:44:19.225 CC test/unit/lib/bdev/part.c/part_ut.o
00:44:29.203 LINK part_ut
00:44:29.203 LINK bdev_ut
00:44:47.340 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o
00:44:47.340 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o
00:44:47.340 LINK scsi_nvme_ut
00:44:47.340 LINK gpt_ut
00:44:47.340 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o
00:44:47.599 CC test/nvme/connect_stress/connect_stress.o
00:44:47.858 LINK blob_bdev_ut
00:44:48.117 LINK connect_stress
00:44:49.496 CC test/nvme/boot_partition/boot_partition.o
00:44:50.066 LINK boot_partition
00:44:50.325 CC test/nvme/compliance/nvme_compliance.o
00:44:51.261 CC test/nvme/fused_ordering/fused_ordering.o
00:44:51.261 LINK nvme_compliance
00:44:51.828 LINK fused_ordering
00:44:52.765 CC test/unit/lib/blob/blob.c/blob_ut.o
00:44:52.765 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o
00:44:53.703 LINK vbdev_lvol_ut
00:44:54.639 CC test/unit/lib/blobfs/tree.c/tree_ut.o
00:44:54.899 LINK tree_ut
00:44:58.186 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o
00:45:00.091 CC test/unit/lib/dma/dma.c/dma_ut.o
00:45:00.659 CC test/nvme/doorbell_aers/doorbell_aers.o
00:45:00.918 LINK dma_ut
00:45:01.180 LINK blobfs_async_ut
00:45:01.438 LINK doorbell_aers
00:45:02.373 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o
00:45:03.750 LINK blob_ut
00:45:04.734 CC test/unit/lib/event/app.c/app_ut.o
00:45:06.638 LINK app_ut
00:45:11.911 LINK bdev_ut
00:45:18.478 CC test/unit/lib/event/reactor.c/reactor_ut.o
00:45:18.478 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o
00:45:21.013 LINK reactor_ut
00:45:21.950 LINK blobfs_sync_ut
00:45:24.484 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o
00:45:25.050 LINK blobfs_bdev_ut
00:45:26.427 CC test/unit/lib/ioat/ioat.c/ioat_ut.o
00:45:26.427 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o
00:45:26.686 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o
00:45:26.944 LINK ioat_ut
00:45:27.511 CC test/unit/lib/iscsi/conn.c/conn_ut.o
00:45:27.511 LINK bdev_raid_sb_ut
00:45:27.798 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o
00:45:28.364 LINK init_grp_ut
00:45:28.623 LINK conn_ut
00:45:28.623 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o
00:45:28.623 LINK bdev_raid_ut
00:45:28.882 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o
00:45:29.141 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o
00:45:29.400 LINK concat_ut
00:45:29.400 LINK raid1_ut
00:45:29.658 CC test/unit/lib/json/json_parse.c/json_parse_ut.o
00:45:29.917 CC test/unit/lib/json/json_util.c/json_util_ut.o
00:45:30.484 LINK raid5f_ut
00:45:30.762 LINK json_util_ut
00:45:31.033 CC test/unit/lib/json/json_write.c/json_write_ut.o
00:45:31.599 LINK json_parse_ut
00:45:31.858 LINK json_write_ut
00:45:32.794 CC test/nvme/fdp/fdp.o
00:45:33.053 LINK fdp
00:45:33.053 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o
00:45:33.312 CC test/unit/lib/iscsi/param.c/param_ut.o
00:45:33.571 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o
00:45:33.830 LINK param_ut
00:45:33.830 LINK bdev_zone_ut
00:45:33.830 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o
00:45:34.089 LINK jsonrpc_server_ut
00:45:34.348 LINK iscsi_ut
00:45:34.348 CC test/unit/lib/log/log.c/log_ut.o
00:45:34.607 LINK log_ut
00:45:34.866 CC test/unit/lib/lvol/lvol.c/lvol_ut.o
00:45:35.433 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o
00:45:35.433 CC test/unit/lib/notify/notify.c/notify_ut.o
00:45:36.001 LINK notify_ut
00:45:36.938 LINK vbdev_zone_block_ut
00:45:36.938 LINK lvol_ut
00:45:37.197 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o
00:45:37.765 CC test/unit/lib/nvme/nvme.c/nvme_ut.o
00:45:38.024 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o
00:45:39.401 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o
00:45:40.338 LINK nvme_ut
00:45:40.910 LINK nvme_ctrlr_cmd_ut
00:45:41.478 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o
00:45:42.045 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o
00:45:42.045 LINK portal_grp_ut
00:45:42.045 LINK nvme_ctrlr_ut
00:45:42.304 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o
00:45:42.304 LINK bdev_nvme_ut
00:45:43.241 LINK nvme_ctrlr_ocssd_cmd_ut
00:45:43.500 LINK nvme_ns_ut
00:45:44.437 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o
00:45:46.341 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o
00:45:46.341 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o
00:45:46.601 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o
00:45:46.601 LINK nvme_ns_cmd_ut
00:45:47.168 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o
00:45:47.168 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o
00:45:47.736 CC test/unit/lib/scsi/dev.c/dev_ut.o
00:45:47.736 LINK tgt_node_ut
00:45:47.995 LINK dev_ut
00:45:48.254 LINK nvme_pcie_ut
00:45:48.254 LINK nvme_ns_ocssd_cmd_ut
00:45:48.513 LINK ctrlr_ut
00:45:48.772 LINK tcp_ut
00:45:49.032 CC test/nvme/cuse/cuse.o
00:45:49.032 CC test/unit/lib/scsi/lun.c/lun_ut.o
00:45:49.633 LINK lun_ut
00:45:50.201 CC test/unit/lib/sock/sock.c/sock_ut.o
00:45:50.459 CC test/unit/lib/sock/posix.c/posix_ut.o
00:45:50.459 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o
00:45:50.459 LINK cuse
00:45:51.394 LINK sock_ut
00:45:51.653 LINK posix_ut
00:45:51.653 CC test/unit/lib/scsi/scsi.c/scsi_ut.o
00:45:52.220 LINK scsi_ut
00:45:52.220 LINK subsystem_ut
00:45:52.787 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o
00:45:53.723 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o
00:45:53.723 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o
00:45:53.982 LINK nvme_poll_group_ut
00:45:54.548 LINK nvme_quirks_ut
00:45:54.548 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o
00:45:54.548 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o
00:45:54.807 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o
00:45:54.807 LINK nvme_qpair_ut
00:45:54.807 CC test/unit/lib/thread/thread.c/thread_ut.o
00:45:55.064 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o
00:45:55.064 LINK scsi_pr_ut
00:45:55.064 LINK scsi_bdev_ut
00:45:55.064 CC test/unit/lib/util/base64.c/base64_ut.o
00:45:55.322 LINK iobuf_ut
00:45:55.322 LINK base64_ut
00:45:55.322 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o
00:45:55.580 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o
00:45:55.580 LINK nvme_tcp_ut
00:45:55.839 LINK ctrlr_bdev_ut
00:45:55.839 LINK thread_ut
00:45:56.775 LINK ctrlr_discovery_ut
00:45:56.775 CC test/unit/lib/util/bit_array.c/bit_array_ut.o
00:45:57.033 CC test/unit/lib/util/cpuset.c/cpuset_ut.o
00:45:57.292 LINK bit_array_ut
00:45:57.292 LINK cpuset_ut
00:45:57.292 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o
00:45:57.859 LINK pci_event_ut
00:45:58.118 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o
00:45:58.377 CC test/unit/lib/init/subsystem.c/subsystem_ut.o
00:45:58.636 CC test/unit/lib/rpc/rpc.c/rpc_ut.o
00:45:58.636 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o
00:45:58.636 CC test/unit/lib/util/crc16.c/crc16_ut.o
00:45:58.636 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o
00:45:58.636 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o
00:45:58.636 LINK subsystem_ut
00:45:58.636 LINK nvme_transport_ut
00:45:58.636 CC test/unit/lib/idxd/idxd.c/idxd_ut.o
00:45:58.636 LINK crc16_ut
00:45:58.895 LINK rpc_ut
00:45:58.895 LINK idxd_user_ut
00:45:59.153 LINK idxd_ut
00:45:59.153 LINK nvmf_ut
00:45:59.412 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o
00:45:59.412 LINK crc32_ieee_ut
00:45:59.671 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o
00:45:59.671 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o
00:45:59.671 LINK rdma_ut
00:45:59.929 CC test/unit/lib/vhost/vhost.c/vhost_ut.o
00:45:59.929 CC test/unit/lib/util/crc32c.c/crc32c_ut.o
00:46:00.188 CC test/unit/lib/util/crc64.c/crc64_ut.o
00:46:00.188 CC test/unit/lib/util/dif.c/dif_ut.o
00:46:00.188 LINK crc64_ut
00:46:00.188 LINK crc32c_ut
00:46:00.755 CC test/unit/lib/util/iov.c/iov_ut.o
00:46:00.755 LINK nvme_io_msg_ut
00:46:00.755 LINK nvme_pcie_common_ut
00:46:01.012 LINK dif_ut
00:46:01.012 LINK iov_ut
00:46:01.012 CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:46:01.578 LINK vhost_ut
00:46:01.838 CC test/unit/lib/util/math.c/math_ut.o
00:46:02.096 LINK math_ut
00:46:02.355 CC test/unit/lib/rdma/common.c/common_ut.o
00:46:02.922 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o
00:46:02.922 LINK common_ut
00:46:02.922 LINK transport_ut
00:46:03.181 LINK ftl_l2p_ut
00:46:03.181 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o
00:46:03.181 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o
00:46:03.748 LINK ftl_io_ut
00:46:04.007 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o
00:46:04.266 LINK ftl_band_ut
00:46:04.266 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o
00:46:04.266 LINK ftl_bitmap_ut
00:46:04.266 CC test/unit/lib/util/pipe.c/pipe_ut.o
00:46:04.833 LINK pipe_ut
00:46:04.833 CC test/unit/lib/util/string.c/string_ut.o
00:46:05.092 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o
00:46:05.351 LINK string_ut
00:46:05.351 LINK nvme_fabric_ut
00:46:05.351 LINK ftl_mempool_ut
00:46:05.919 CC test/unit/lib/util/xor.c/xor_ut.o
00:46:05.919 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o
00:46:06.178 LINK xor_ut
00:46:06.178 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o
00:46:06.448 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o
00:46:06.448 LINK ftl_mngt_ut
00:46:06.448 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o
00:46:06.448 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o
00:46:06.730 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o
00:46:06.730 LINK nvme_opal_ut
00:46:06.730 LINK ftl_sb_ut
00:46:06.989 LINK ftl_layout_upgrade_ut
00:46:07.248 LINK nvme_cuse_ut
00:46:07.507 LINK nvme_rdma_ut
00:46:54.182 json_parse_ut.c: In function ‘test_parse_nesting’:
00:46:54.182 json_parse_ut.c:616:1: note: variable tracking size limit exceeded with ‘-fvar-tracking-assignments’, retrying without
00:46:54.182 616 | test_parse_nesting(void)
00:46:54.182 | ^
00:46:54.182 17:25:42 -- spdk/autopackage.sh@44 -- $ make -j10 clean
00:46:57.469 make[1]: Nothing to be done for 'clean'.
00:46:57.469 17:25:45 -- spdk/autopackage.sh@46 -- $ timing_exit build_release
00:46:57.469 17:25:45 -- common/autotest_common.sh@728 -- $ xtrace_disable
00:46:57.469 17:25:45 -- common/autotest_common.sh@10 -- $ set +x
00:46:57.469 17:25:45 -- spdk/autopackage.sh@48 -- $ timing_finish
00:46:57.469 17:25:45 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:46:57.469 17:25:45 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:46:57.469 17:25:45 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:46:57.469 + [[ -n 2132 ]]
00:46:57.469 + sudo kill 2132
00:46:57.478 [Pipeline] }
00:46:57.494 [Pipeline] // timeout
00:46:57.499 [Pipeline] }
00:46:57.513 [Pipeline] // stage
00:46:57.518 [Pipeline] }
00:46:57.532 [Pipeline] // catchError
00:46:57.541 [Pipeline] stage
00:46:57.544 [Pipeline] { (Stop VM)
00:46:57.556 [Pipeline] sh
00:46:57.838 + vagrant halt
00:47:01.124 ==> default: Halting domain...
00:47:11.149 [Pipeline] sh
00:47:11.427 + vagrant destroy -f
00:47:13.959 ==> default: Removing domain...
00:47:14.910 [Pipeline] sh
00:47:15.190 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest_2/output
00:47:15.199 [Pipeline] }
00:47:15.214 [Pipeline] // stage
00:47:15.220 [Pipeline] }
00:47:15.234 [Pipeline] // dir
00:47:15.240 [Pipeline] }
00:47:15.254 [Pipeline] // wrap
00:47:15.261 [Pipeline] }
00:47:15.274 [Pipeline] // catchError
00:47:15.284 [Pipeline] stage
00:47:15.286 [Pipeline] { (Epilogue)
00:47:15.299 [Pipeline] sh
00:47:15.582 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:47:30.470 [Pipeline] catchError
00:47:30.472 [Pipeline] {
00:47:30.485 [Pipeline] sh
00:47:30.767 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:47:31.025 Artifacts sizes are good
00:47:31.033 [Pipeline] }
00:47:31.046 [Pipeline] // catchError
00:47:31.055 [Pipeline] archiveArtifacts
00:47:31.061 Archiving artifacts
00:47:31.327 [Pipeline] cleanWs
00:47:31.337 [WS-CLEANUP] Deleting project workspace...
00:47:31.337 [WS-CLEANUP] Deferred wipeout is used...
00:47:31.343 [WS-CLEANUP] done
00:47:31.344 [Pipeline] }
00:47:31.358 [Pipeline] // stage
00:47:31.362 [Pipeline] }
00:47:31.374 [Pipeline] // node
00:47:31.379 [Pipeline] End of Pipeline
00:47:31.408 Finished: SUCCESS